Feb 02 13:02:27 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 13:02:28 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:02:28 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:02:28 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 02 13:02:29 crc kubenswrapper[4955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 13:02:29 crc kubenswrapper[4955]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 02 13:02:29 crc kubenswrapper[4955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 13:02:29 crc kubenswrapper[4955]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 13:02:29 crc kubenswrapper[4955]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 02 13:02:29 crc kubenswrapper[4955]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.493861 4955 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499510 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499530 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499535 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499540 4955 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499545 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499550 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499573 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499579 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499584 4955 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499591 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499597 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499601 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499604 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499608 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499612 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499617 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499621 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499626 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499640 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499644 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499647 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499651 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499655 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499658 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499662 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499665 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499669 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499672 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499676 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499679 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499683 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499687 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499691 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499695 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499698 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499702 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499705 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499709 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499712 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499716 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499722 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499727 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499731 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499737 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499743 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499748 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499752 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499756 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499760 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499764 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499768 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499771 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499775 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499780 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499784 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499789 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499794 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499798 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499803 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499807 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499811 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499815 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499819 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499824 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499829 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499833 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499838 4955 feature_gate.go:330]
unrecognized feature gate: BootcNodeManagement Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499843 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499847 4955 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499852 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.499857 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500735 4955 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500753 4955 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500763 4955 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500770 4955 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500776 4955 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500780 4955 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500786 4955 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500792 4955 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500797 4955 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500801 4955 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500806 4955 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500810 4955 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500814 4955 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500818 4955 flags.go:64] FLAG: --cgroup-root="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500823 4955 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500827 4955 flags.go:64] FLAG: --client-ca-file="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500831 4955 flags.go:64] FLAG: --cloud-config="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500835 4955 flags.go:64] FLAG: --cloud-provider="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500839 4955 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500844 4955 flags.go:64] FLAG: --cluster-domain="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500848 4955 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500853 4955 flags.go:64] FLAG: --config-dir="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500858 4955 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500864 4955 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500871 4955 flags.go:64] 
FLAG: --container-log-max-size="10Mi" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500876 4955 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500881 4955 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500885 4955 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500889 4955 flags.go:64] FLAG: --contention-profiling="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500893 4955 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500898 4955 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500902 4955 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500906 4955 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500912 4955 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500916 4955 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500920 4955 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500924 4955 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500929 4955 flags.go:64] FLAG: --enable-server="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500933 4955 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500939 4955 flags.go:64] FLAG: --event-burst="100" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500945 4955 flags.go:64] FLAG: --event-qps="50" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500949 4955 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500953 4955 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500957 4955 flags.go:64] FLAG: --eviction-hard="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500963 4955 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500968 4955 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500972 4955 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500976 4955 flags.go:64] FLAG: --eviction-soft="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500980 4955 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500984 4955 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500988 4955 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500992 4955 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.500997 4955 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501001 4955 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 
13:02:29.501006 4955 flags.go:64] FLAG: --feature-gates="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501012 4955 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501016 4955 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501021 4955 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501026 4955 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501030 4955 flags.go:64] FLAG: --healthz-port="10248" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501034 4955 flags.go:64] FLAG: --help="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501039 4955 flags.go:64] FLAG: --hostname-override="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501043 4955 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501047 4955 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501051 4955 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501055 4955 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501059 4955 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501064 4955 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501068 4955 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501072 4955 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501076 4955 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501080 4955 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501085 4955 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501089 4955 flags.go:64] FLAG: --kube-reserved="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501094 4955 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501098 4955 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501103 4955 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501107 4955 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501111 4955 flags.go:64] FLAG: --lock-file="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501115 4955 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501119 4955 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501123 4955 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501129 4955 flags.go:64] FLAG: --log-json-split-stream="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501133 4955 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501137 4955 flags.go:64] FLAG: 
--log-text-split-stream="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501141 4955 flags.go:64] FLAG: --logging-format="text" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501145 4955 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501150 4955 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501154 4955 flags.go:64] FLAG: --manifest-url="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501158 4955 flags.go:64] FLAG: --manifest-url-header="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501163 4955 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501168 4955 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501173 4955 flags.go:64] FLAG: --max-pods="110" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501177 4955 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501181 4955 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501185 4955 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501189 4955 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501193 4955 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501198 4955 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501202 4955 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501213 4955 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501217 4955 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501222 4955 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501226 4955 flags.go:64] FLAG: --pod-cidr="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501230 4955 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501237 4955 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501241 4955 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501245 4955 flags.go:64] FLAG: --pods-per-core="0" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501249 4955 flags.go:64] FLAG: --port="10250" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501254 4955 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501258 4955 flags.go:64] FLAG: --provider-id="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501262 4955 flags.go:64] FLAG: --qos-reserved="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501266 4955 flags.go:64] FLAG: --read-only-port="10255" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501270 4955 flags.go:64] FLAG: 
--register-node="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501274 4955 flags.go:64] FLAG: --register-schedulable="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501278 4955 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501285 4955 flags.go:64] FLAG: --registry-burst="10" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501289 4955 flags.go:64] FLAG: --registry-qps="5" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501293 4955 flags.go:64] FLAG: --reserved-cpus="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501297 4955 flags.go:64] FLAG: --reserved-memory="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501316 4955 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501320 4955 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501325 4955 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501329 4955 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501333 4955 flags.go:64] FLAG: --runonce="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501337 4955 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501341 4955 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501346 4955 flags.go:64] FLAG: --seccomp-default="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501350 4955 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501354 4955 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501358 4955 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501363 4955 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501367 4955 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501371 4955 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501375 4955 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501379 4955 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501383 4955 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501388 4955 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501392 4955 flags.go:64] FLAG: --system-cgroups="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501396 4955 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501402 4955 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501406 4955 flags.go:64] FLAG: --tls-cert-file="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501410 4955 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501414 4955 flags.go:64] 
FLAG: --tls-min-version="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501418 4955 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501423 4955 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501427 4955 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501431 4955 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501435 4955 flags.go:64] FLAG: --v="2" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501441 4955 flags.go:64] FLAG: --version="false" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501446 4955 flags.go:64] FLAG: --vmodule="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501451 4955 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501456 4955 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501576 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501582 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501587 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501593 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501598 4955 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501602 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501607 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501611 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501615 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501619 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501624 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501628 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501632 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501636 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501640 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501644 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501649 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501653 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501657 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501661 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501665 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501668 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501672 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501675 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501679 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501683 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501686 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501690 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501694 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501697 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501701 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501705 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501708 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501715 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501718 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501723 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501727 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501732 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501736 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501739 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501743 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501748 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501752 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501755 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501759 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501762 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501766 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501770 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501774 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501777 4955 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501781 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501785 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501788 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501792 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501795 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501798 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501802 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501805 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501809 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501813 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501816 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501819 4955 feature_gate.go:330] unrecognized feature 
gate: RouteAdvertisements Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501823 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501827 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501831 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501836 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501840 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501843 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501847 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501850 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.501853 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.501867 4955 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.510923 4955 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.510964 4955 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511046 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511057 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511065 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511073 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511080 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511088 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511096 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511102 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511108 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511113 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511118 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511123 4955 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511128 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511133 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511138 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511145 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511153 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511159 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511164 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511170 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511174 4955 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511179 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511184 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511189 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511194 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511199 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511204 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511209 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511214 4955 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511219 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511224 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511268 4955 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511274 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511280 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511289 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511294 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511300 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511305 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511311 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511317 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511322 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511328 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511336 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511342 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511348 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511353 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511358 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511363 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511368 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511373 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511378 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511383 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511389 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511394 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511399 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511404 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511409 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:02:29 crc 
kubenswrapper[4955]: W0202 13:02:29.511414 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511418 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511423 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511428 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511433 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511438 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511443 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511449 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511455 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511461 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511467 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511472 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511478 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511484 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.511493 4955 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511684 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511701 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511706 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511712 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511716 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511721 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511726 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511731 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511736 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511741 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511746 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511751 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511756 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511761 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511766 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511771 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511776 4955 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511781 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511786 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511791 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511798 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511804 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511810 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511815 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511821 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511826 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511830 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511835 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511840 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511845 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511850 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511854 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511859 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511864 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511870 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511874 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511879 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511884 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511890 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511896 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511901 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511906 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511911 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511916 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511921 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511927 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511933 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511939 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511945 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511950 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511956 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511962 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511967 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511973 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511978 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511983 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511988 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511993 4955 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.511998 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512002 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512008 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512012 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512018 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512022 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512027 4955 feature_gate.go:330] 
unrecognized feature gate: NewOLM Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512032 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512037 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512041 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512046 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512051 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.512056 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.512065 4955 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.513094 4955 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.517240 4955 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.517327 4955 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.518733    4955 server.go:997] "Starting client certificate rotation"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.518752    4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.518964    4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 13:44:18.5477526 +0000 UTC
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.519077    4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.547126    4955 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.550085    4955 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.553287    4955 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.574674    4955 log.go:25] "Validated CRI v1 runtime API"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.608815    4955 log.go:25] "Validated CRI v1 image API"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.611575    4955 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.616709    4955 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-12-58-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.616746    4955 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.633402    4955 manager.go:217] Machine: {Timestamp:2026-02-02 13:02:29.630645819 +0000 UTC m=+0.542982269 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a1e684ba-38f3-4fac-88c1-b29f9c39bcf4 BootID:98377b24-b2ba-4f17-bb8d-a7b7a933930f Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b9:18:4b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b9:18:4b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5d:08:c5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d4:02:78 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:64:ac:10 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:29:a0:53 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:ef:5f:4d:7a:ee Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:ec:07:55:1f:40 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.633671    4955 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.633972    4955 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.635264    4955 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.635446    4955 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.635478    4955 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.635701    4955 topology_manager.go:138] "Creating topology manager with none policy"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.635712    4955 container_manager_linux.go:303] "Creating device plugin manager"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.636154    4955 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.636183    4955 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.636437    4955 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.636537    4955 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.639924    4955 kubelet.go:418] "Attempting to sync node with API server"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.639945    4955 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.639966    4955 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.639978    4955 kubelet.go:324] "Adding apiserver pod source"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.639988    4955 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.644696    4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.644706    4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.644774    4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError"
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.644791    4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.646750    4955 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.647954    4955 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.649601    4955 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651241    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651286    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651301    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651353    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651378    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651392    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651407    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651430    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651446    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651460    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651506    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.651525    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.653648    4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.654436    4955 server.go:1280] "Started kubelet"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.655304    4955 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.655437    4955 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.656211    4955 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.656884    4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused
Feb 02 13:02:29 crc systemd[1]: Started Kubernetes Kubelet.
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.658226    4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.658747    4955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.658847    4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:45:12.508053771 +0000 UTC
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.659173    4955 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.659670    4955 server.go:460] "Adding debug handlers to kubelet server"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.659701    4955 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.659685    4955 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.659873    4955 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.659942    4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="200ms"
Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.660333    4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.660462    4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.660869    4955 factory.go:55] Registering systemd factory
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.661360    4955 factory.go:221] Registration of the systemd container factory successfully
Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.660383    4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.86:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18906f976952f930 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:02:29.654395184 +0000 UTC m=+0.566731674,LastTimestamp:2026-02-02 13:02:29.654395184 +0000 UTC m=+0.566731674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.662352    4955 factory.go:153] Registering CRI-O factory
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.662466    4955 factory.go:221] Registration of the crio container factory successfully
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.662664    4955 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.662783    4955 factory.go:103] Registering Raw factory
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.662888    4955 manager.go:1196] Started watching for new ooms in manager
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.663824    4955 manager.go:319] Starting recovery of all containers
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674544    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674633    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674650    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674665    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674679    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674692    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674707    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674720    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674736    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674749    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674762    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674775    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674788    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674805    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674818    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674832    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674846    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674862    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674914    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674927    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674940    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674958    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674971    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674984    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.674997    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675010    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675025    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675039    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675054    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675067    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675079    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675092    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675109    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675121    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675134    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675148    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675162    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675174    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675189    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675207    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675224    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675237    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675250    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675262    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675276    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675289    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675303    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675320    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675359    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675375    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675389    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675429    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675447    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675462    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675475    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675490    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675504    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675517    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675530    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675543    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675575    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675594    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675643    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.675663    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678513    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678549    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678593    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678609    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678622    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678634    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678648    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678661    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678676    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678697    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678710    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678722    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678736    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678749    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678763    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678779    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678794    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678807    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678821    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678834    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678849    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678863    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678877    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678890    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678904    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678920    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678933    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678947    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678961    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678974    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.678986    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679000    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679015    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679030    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679043    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679056    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679070    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679084    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679097    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679111    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679131    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679150    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679165    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679179    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679192    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679208    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679224    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.679241    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681138    4955 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681204    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681232    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681253    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681273    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681292    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681311    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681329    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681346    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681363    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681382    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681401    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681418    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681435    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681453    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681470    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681490    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681508    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681525    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681545    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681627    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681648    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681665    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681683    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681700    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681718    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681737    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681761    4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681790    4955 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681826 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681908 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681930 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681953 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681972 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.681993 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682014 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682032 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682053 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682074 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682092 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682110 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682128 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682147 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682166 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682185 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682204 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682222 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682241 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682260 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682280 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682297 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682317 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682335 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682353 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682371 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682389 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682407 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682424 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682443 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682461 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682478 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682498 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682517 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682536 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682597 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682616 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682637 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682655 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682673 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682691 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682712 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682729 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682746 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682764 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682782 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682799 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682817 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682835 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682852 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682871 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682889 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682906 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682925 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682943 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682961 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682980 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.682999 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683015 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683034 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683052 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683069 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683087 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683107 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683125 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683143 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683161 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683179 4955 reconstruct.go:97] "Volume reconstruction finished" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.683191 4955 reconciler.go:26] "Reconciler: start to sync state" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.684271 4955 manager.go:324] Recovery completed Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.694525 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.697163 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.697205 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.697217 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.698612 4955 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.698630 4955 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.698652 4955 state_mem.go:36] "Initialized new in-memory state store" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.713450 4955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.714936 4955 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.714992 4955 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.715021 4955 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.715090 4955 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 13:02:29 crc kubenswrapper[4955]: W0202 13:02:29.716175 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.716285 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.719523 4955 policy_none.go:49] "None policy: Start" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.720355 4955 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.720389 4955 state_mem.go:35] "Initializing new in-memory state store" Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.760252 4955 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.781732 4955 manager.go:334] "Starting Device Plugin manager" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.781781 4955 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.781795 4955 server.go:79] "Starting device plugin registration server" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.782277 4955 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.782305 4955 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.782455 4955 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.782645 4955 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.782664 4955 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.789726 4955 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.815919 4955 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 13:02:29 crc kubenswrapper[4955]: 
I0202 13:02:29.816021 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.817223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.817260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.817272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.817372 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818392 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818603 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818739 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818746 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.818778 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.819636 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.821687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.821719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.821729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.822598 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.822628 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.822640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823083 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823228 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823353 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823394 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823950 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.823962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.824147 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.824275 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.824314 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.824274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.824377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.824388 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.826929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.826953 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.826964 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.826979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.827004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.827014 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.827200 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.827240 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.829413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.829461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.829474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.860873 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="400ms" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.882432 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.883635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.883695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.883715 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.883747 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:29 crc kubenswrapper[4955]: E0202 13:02:29.884296 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.86:6443: connect: connection refused" node="crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885506 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885576 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885647 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885680 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885706 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885728 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885746 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885769 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885831 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885864 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885885 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885907 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885927 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.885946 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986669 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986710 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986751 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986767 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986814 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986820 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986815 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986879 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986916 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986931 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986954 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986929 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986969 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986988 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987025 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987028 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.986919 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987027 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987136 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987045 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987166 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987182 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987219 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987256 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987296 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987260 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:29 crc kubenswrapper[4955]: I0202 13:02:29.987586 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.084411 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.086091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.086167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.086191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.086239 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:30 crc kubenswrapper[4955]: E0202 13:02:30.086860 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.86:6443: connect: connection refused" node="crc" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.153034 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.158656 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.183609 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.204165 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.204512 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2b1a9820ad85af89ba889660fe8bd421f6f11f0b73a889243882be81e66b420a WatchSource:0}: Error finding container 2b1a9820ad85af89ba889660fe8bd421f6f11f0b73a889243882be81e66b420a: Status 404 returned error can't find the container with id 2b1a9820ad85af89ba889660fe8bd421f6f11f0b73a889243882be81e66b420a Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.207946 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9a2223f0ae3d88301cfa3caeb4082f7762a3642ff086628aeb54bf5fbebbc2dc WatchSource:0}: Error finding container 9a2223f0ae3d88301cfa3caeb4082f7762a3642ff086628aeb54bf5fbebbc2dc: Status 404 returned error can't find the container with id 9a2223f0ae3d88301cfa3caeb4082f7762a3642ff086628aeb54bf5fbebbc2dc Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.208167 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.216820 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b682d032ad1dab44ae65fcbc3031bcd3b42495c4556a8f14c33432e47d445976 WatchSource:0}: Error finding container b682d032ad1dab44ae65fcbc3031bcd3b42495c4556a8f14c33432e47d445976: Status 404 returned error can't find the container with id b682d032ad1dab44ae65fcbc3031bcd3b42495c4556a8f14c33432e47d445976 Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.217759 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e5975cc1f2987f0ba2f34fc3eff82fb329eed0ece1261d5c4ab580f9190f9365 WatchSource:0}: Error finding container e5975cc1f2987f0ba2f34fc3eff82fb329eed0ece1261d5c4ab580f9190f9365: Status 404 returned error can't find the container with id e5975cc1f2987f0ba2f34fc3eff82fb329eed0ece1261d5c4ab580f9190f9365 Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.227011 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-311409567cc54c03aace813f37db6eb28a418ecd22ada55c7640fa4676e7e8f7 WatchSource:0}: Error finding container 311409567cc54c03aace813f37db6eb28a418ecd22ada55c7640fa4676e7e8f7: Status 404 returned error can't find the container with id 311409567cc54c03aace813f37db6eb28a418ecd22ada55c7640fa4676e7e8f7 Feb 02 13:02:30 crc kubenswrapper[4955]: E0202 13:02:30.261928 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="800ms" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.487911 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.489512 4955 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.489548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.489574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.489605 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:30 crc kubenswrapper[4955]: E0202 13:02:30.490096 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.86:6443: connect: connection refused" node="crc" Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.650609 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:30 crc kubenswrapper[4955]: E0202 13:02:30.650696 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.658363 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.659450 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:13:07.359861929 +0000 UTC Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.672173 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:30 crc kubenswrapper[4955]: E0202 13:02:30.672250 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.719953 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a2223f0ae3d88301cfa3caeb4082f7762a3642ff086628aeb54bf5fbebbc2dc"} Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.721508 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"311409567cc54c03aace813f37db6eb28a418ecd22ada55c7640fa4676e7e8f7"} Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 
13:02:30.723057 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5975cc1f2987f0ba2f34fc3eff82fb329eed0ece1261d5c4ab580f9190f9365"} Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.724319 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b682d032ad1dab44ae65fcbc3031bcd3b42495c4556a8f14c33432e47d445976"} Feb 02 13:02:30 crc kubenswrapper[4955]: I0202 13:02:30.725686 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2b1a9820ad85af89ba889660fe8bd421f6f11f0b73a889243882be81e66b420a"} Feb 02 13:02:30 crc kubenswrapper[4955]: W0202 13:02:30.923488 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:30 crc kubenswrapper[4955]: E0202 13:02:30.923607 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:31 crc kubenswrapper[4955]: E0202 13:02:31.063255 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="1.6s" Feb 02 13:02:31 crc kubenswrapper[4955]: W0202 13:02:31.242418 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:31 crc kubenswrapper[4955]: E0202 13:02:31.242491 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.291103 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.292453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.292509 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.292529 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.292605 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:31 crc kubenswrapper[4955]: E0202 13:02:31.293127 4955 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.86:6443: connect: connection refused" node="crc" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.657758 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.660171 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:45:40.301481974 +0000 UTC Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.720355 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 13:02:31 crc kubenswrapper[4955]: E0202 13:02:31.722126 4955 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.732122 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5be602f3228acff6d3c11b288e4e8e5572680422268b7a0a8a952416fd03370"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.732166 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.732166 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"960f402caf1caf531e49391c1aa0e9e58a06ac82db00ad44d31c51a4b40319e3"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.732265 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f5e5956114c1979f525d57c7df146bdd9ff455a6532ee2d5528f33a2f47b53a"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.732293 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"995a8c5f4d865cd956d49e6e7702944feb1fad045e53ed4e8e23e31c495443c5"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.733050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.733093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.733112 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.733797 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182" exitCode=0 Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.733855 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.733876 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.734858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.734885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.734915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.736078 4955 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1" exitCode=0 Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.736153 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.736173 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.736236 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737217 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737248 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737703 4955 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172" exitCode=0 Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737760 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.737836 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.739585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.739610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.739622 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.741644 4955 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37" exitCode=0 Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.741694 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37"} Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.741760 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.743794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.743825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:31 crc kubenswrapper[4955]: I0202 13:02:31.743836 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:31 crc kubenswrapper[4955]: E0202 13:02:31.789929 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.86:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18906f976952f930 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:02:29.654395184 +0000 UTC m=+0.566731674,LastTimestamp:2026-02-02 13:02:29.654395184 +0000 UTC m=+0.566731674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.657661 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.660870 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:25:32.515526891 +0000 UTC Feb 02 13:02:32 crc kubenswrapper[4955]: E0202 
13:02:32.664368 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="3.2s" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.747236 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.747285 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.747295 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.747305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.747313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.747403 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.748168 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.748202 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.748212 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.749750 4955 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a" exitCode=0 Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.749799 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.749828 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.750309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.750330 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.750337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.751877 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e8b3a3306313c83e5b30cad24fd57a77b254ceaa21df4228d109f47cde1aa378"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.751960 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.752899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.752924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.752934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.754985 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755411 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755691 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755720 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755731 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb"} Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755940 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.755970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.756673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.756689 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.756696 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.889503 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.893792 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.894886 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.894918 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.894932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:32 crc kubenswrapper[4955]: I0202 13:02:32.894954 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:32 crc kubenswrapper[4955]: E0202 13:02:32.895270 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.86:6443: connect: connection refused" node="crc" Feb 02 13:02:32 crc kubenswrapper[4955]: W0202 13:02:32.922626 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.86:6443: connect: connection refused Feb 02 13:02:32 crc kubenswrapper[4955]: E0202 13:02:32.922738 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.86:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.097087 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.661680 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:07:04.675086176 +0000 UTC Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761255 4955 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d" exitCode=0 Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761319 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d"} Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761359 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761430 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761503 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:33 crc 
kubenswrapper[4955]: I0202 13:02:33.761596 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761628 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.761927 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.763105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.763173 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.763209 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.763818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.763962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.763977 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.764319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.764363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.764384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.767288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.767332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.767347 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.767285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.767434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:33 crc kubenswrapper[4955]: I0202 13:02:33.767452 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.662449 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:55:26.702379519 +0000 UTC Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767275 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8ddd999d11629a86b2948c037a0560c0865171ba7e6ca44ea8bfc6c57a1f40a1"} Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767322 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"852cf92983990f7b9d8ee62f9ee1757642ed7f01f10d4f1e58626dd053918352"} Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767336 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d81aacf34084ecf71590ddfa05746a8a44dcf932aa599453ccfc97d87a3c208c"} Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767352 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767412 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767443 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.767347 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"38752ea10ee618d200ad022f1a1c2310db4ebe6e6df323d5c72e93df56a5d976"} Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768028 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fdea6aaf929fcddb47ebb0271cb7a35d0c9e5ac23590c3ba9a8d14e905f4c8d0"} Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768679 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.768798 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.769511 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.769578 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:34 crc kubenswrapper[4955]: I0202 13:02:34.769597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:35 crc kubenswrapper[4955]: I0202 13:02:35.662737 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-10 02:13:45.044964349 +0000 UTC Feb 02 13:02:35 crc kubenswrapper[4955]: I0202 13:02:35.745672 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 13:02:35 crc kubenswrapper[4955]: I0202 13:02:35.770711 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:35 crc kubenswrapper[4955]: I0202 13:02:35.772008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:35 crc kubenswrapper[4955]: I0202 13:02:35.772054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:35 crc kubenswrapper[4955]: I0202 13:02:35.772071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.096313 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.097934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.097981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.097992 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.098020 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.341953 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.342182 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.343765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.343803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.343819 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.560423 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.560763 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.561715 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.561821 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.561935 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.663708 4955 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:35:42.097874902 +0000 UTC Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.954114 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.954439 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.959669 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.960106 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:36 crc kubenswrapper[4955]: I0202 13:02:36.960261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.003225 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.003389 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.004482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.004523 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.004543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.013869 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.515306 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.555647 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.555997 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.557776 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.557829 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.557842 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.665183 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:21:49.105761139 +0000 UTC Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.776114 4955 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.776875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.776904 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.776914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.854698 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.854931 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.856270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.856335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:37 crc kubenswrapper[4955]: I0202 13:02:37.856361 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:38 crc kubenswrapper[4955]: I0202 13:02:38.665615 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:24:35.994152214 +0000 UTC Feb 02 13:02:38 crc kubenswrapper[4955]: I0202 13:02:38.779013 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:38 crc kubenswrapper[4955]: I0202 13:02:38.780235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:38 crc kubenswrapper[4955]: I0202 13:02:38.780288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:38 crc kubenswrapper[4955]: I0202 13:02:38.780309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:39 crc kubenswrapper[4955]: I0202 13:02:39.560509 4955 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 13:02:39 crc kubenswrapper[4955]: I0202 13:02:39.560677 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:02:39 crc kubenswrapper[4955]: I0202 13:02:39.666619 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:08:53.501918108 +0000 UTC Feb 02 13:02:39 crc kubenswrapper[4955]: E0202 13:02:39.790063 4955 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node 
info: node \"crc\" not found" Feb 02 13:02:40 crc kubenswrapper[4955]: I0202 13:02:40.667163 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:14:36.799696595 +0000 UTC Feb 02 13:02:41 crc kubenswrapper[4955]: I0202 13:02:41.668007 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:13:21.170451215 +0000 UTC Feb 02 13:02:42 crc kubenswrapper[4955]: I0202 13:02:42.669128 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:06:39.787059963 +0000 UTC Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.351806 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49588->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.351911 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49588->192.168.126.11:17697: read: connection reset by peer" Feb 02 13:02:43 crc kubenswrapper[4955]: W0202 13:02:43.396539 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.396669 4955 trace.go:236] Trace[1802652402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:02:33.395) (total time: 10001ms): Feb 02 13:02:43 crc kubenswrapper[4955]: Trace[1802652402]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:02:43.396) Feb 02 13:02:43 crc kubenswrapper[4955]: Trace[1802652402]: [10.001262849s] [10.001262849s] END Feb 02 13:02:43 crc kubenswrapper[4955]: E0202 13:02:43.396696 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 13:02:43 crc kubenswrapper[4955]: W0202 13:02:43.429309 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.429414 4955 trace.go:236] Trace[632075815]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:02:33.428) (total time: 10001ms): Feb 02 13:02:43 crc kubenswrapper[4955]: Trace[632075815]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 
10001ms (13:02:43.429) Feb 02 13:02:43 crc kubenswrapper[4955]: Trace[632075815]: [10.001150347s] [10.001150347s] END Feb 02 13:02:43 crc kubenswrapper[4955]: E0202 13:02:43.429436 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 13:02:43 crc kubenswrapper[4955]: W0202 13:02:43.476579 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.476669 4955 trace.go:236] Trace[573673082]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:02:33.474) (total time: 10002ms): Feb 02 13:02:43 crc kubenswrapper[4955]: Trace[573673082]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:02:43.476) Feb 02 13:02:43 crc kubenswrapper[4955]: Trace[573673082]: [10.002349283s] [10.002349283s] END Feb 02 13:02:43 crc kubenswrapper[4955]: E0202 13:02:43.476689 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.520060 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.520133 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.527161 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.527271 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.669269 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:34:05.88428106 +0000 UTC Feb 02 13:02:43 crc 
kubenswrapper[4955]: I0202 13:02:43.792349 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.794370 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67" exitCode=255 Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.794406 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67"} Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.794515 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.795463 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.795528 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.795546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:43 crc kubenswrapper[4955]: I0202 13:02:43.796377 4955 scope.go:117] "RemoveContainer" containerID="5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67" Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.670026 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:26:23.538524639 +0000 UTC Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.798389 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.800892 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a"} Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.801188 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.802494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.802546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:44 crc kubenswrapper[4955]: I0202 13:02:44.802584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:45 crc kubenswrapper[4955]: I0202 13:02:45.670892 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:41:22.772773349 +0000 UTC Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.671334 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-22 11:18:40.626348048 +0000 UTC Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.718982 4955 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.962189 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.962400 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.962867 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.964124 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.964188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.964201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:46 crc kubenswrapper[4955]: I0202 13:02:46.968024 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.077053 4955 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.525651 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.525825 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.527041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.527068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.527080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.671528 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:05:50.796872171 +0000 UTC Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.810452 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.811860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.811907 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.811926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.902470 4955 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.902690 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.903810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.903845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.903856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:47 crc kubenswrapper[4955]: I0202 13:02:47.924116 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 13:02:48 crc kubenswrapper[4955]: E0202 13:02:48.516753 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 02 13:02:48 crc kubenswrapper[4955]: E0202 13:02:48.521163 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.521988 4955 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.522262 4955 trace.go:236] Trace[852526761]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:02:38.334) (total time: 10187ms): Feb 02 13:02:48 crc kubenswrapper[4955]: Trace[852526761]: ---"Objects listed" error: 10187ms (13:02:48.522) Feb 02 13:02:48 crc kubenswrapper[4955]: Trace[852526761]: [10.187444991s] [10.187444991s] END Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.522301 4955 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.543350 4955 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.581658 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.581769 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.582927 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.582990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.583008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.586691 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.672302 4955 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:53:14.376789601 +0000 UTC Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.812441 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.812478 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.812548 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.813861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.813911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.813920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.813933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.813955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.813981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.814024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.814039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:48 crc kubenswrapper[4955]: I0202 13:02:48.814048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.042781 4955 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.655426 4955 apiserver.go:52] "Watching apiserver" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.660875 4955 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.661228 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.661635 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.661690 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.661742 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.662082 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.662156 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.662094 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.662209 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.662246 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.662409 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.664184 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.664306 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.664745 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.664896 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.665064 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.665832 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.665916 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.666419 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.667058 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.672687 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:06:41.002476299 +0000 UTC Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.694356 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.708734 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.719845 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.731445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.742637 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.753854 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.761086 4955 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.764970 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.778248 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.790758 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.803263 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.812300 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.816329 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.817012 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.819041 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a" exitCode=255 Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.819110 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a"} Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.819197 4955 scope.go:117] "RemoveContainer" containerID="5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.826874 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.826972 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827010 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827034 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827062 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827096 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827121 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827150 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827574 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827588 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827617 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827716 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827766 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827788 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.827904 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828076 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828143 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828176 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828245 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828449 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828691 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828736 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828757 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828804 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828795 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828815 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829198 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.828790 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829508 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829594 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829637 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829641 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829709 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829745 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829771 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829975 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.829795 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830089 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830106 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830134 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830163 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830192 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830214 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830236 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830252 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830270 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830286 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830303 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830322 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830346 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830371 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830399 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830429 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830455 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830489 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830516 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830538 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830578 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830594 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830615 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830634 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830653 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830670 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830686 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830705 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830725 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830744 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830760 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830776 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830795 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830813 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830830 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830846 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830343 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830377 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830361 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830436 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830469 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830863 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831031 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831082 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831124 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831163 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831200 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831235 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831269 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831299 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831334 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831370 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831399 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831440 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831523 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831637 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831680 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831713 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831752 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831786 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831819 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831851 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831884 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831916 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831950 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831984 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832025 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832060 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832091 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832127 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832161 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832193 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832226 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832264 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832302 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832337 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832375 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832413 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832447 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832486 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 
13:02:49.832526 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832593 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832629 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832663 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832700 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832735 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832768 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832800 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832833 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832895 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832934 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830465 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832996 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.833006 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830588 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830673 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.833028 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830851 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831641 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831656 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831749 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.831757 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832017 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832148 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832358 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.833205 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.833292 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832474 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832497 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.833331 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832680 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832770 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832795 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832903 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.830676 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.833490 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.834038 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.834080 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.834231 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835286 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835251 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835483 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835580 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835538 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835612 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835680 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835896 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.835899 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836163 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836188 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836235 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836284 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836307 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836626 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836693 4955 scope.go:117] "RemoveContainer" containerID="b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.836873 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836699 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836737 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836834 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.836983 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837114 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837373 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837402 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837390 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837427 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837441 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837732 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837912 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.837965 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838050 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838062 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838147 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838201 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838308 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838387 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838394 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838467 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838547 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.838987 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839143 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839542 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839623 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839667 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.832969 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839820 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839836 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839863 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839904 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839940 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839977 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.839993 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840016 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840048 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840056 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840096 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840134 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840177 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840266 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840282 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840304 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840308 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840342 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840384 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840440 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840492 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840534 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840632 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840671 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840706 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840741 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840775 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" 
(UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840794 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840812 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840851 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840886 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840919 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.840927 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841014 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841108 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841147 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841174 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841199 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841236 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841288 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841316 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841342 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841330 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841372 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841402 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841428 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841457 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 
13:02:49.841486 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841516 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841541 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841594 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841620 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841644 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841668 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841692 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841718 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841747 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841776 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841800 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841826 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841854 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841878 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841901 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841964 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841989 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842017 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842038 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842061 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842084 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842104 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842125 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842145 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842166 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842187 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842207 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842231 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842256 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842366 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842389 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842413 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842438 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842466 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842495 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842520 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842543 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842588 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842615 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842646 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842673 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842696 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842723 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842749 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842773 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842791 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842809 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842865 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842887 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842907 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842924 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842945 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842967 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842983 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843000 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843020 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843037 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843059 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843078 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843096 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843116 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843197 4955 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843210 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843221 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843232 4955 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843244 4955 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843255 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843265 4955 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843274 4955 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843284 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843293 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843307 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843317 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843327 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843337 4955 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843347 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843358 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843367 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843377 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843386 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843395 4955 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843405 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843414 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843424 4955 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843434 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843444 4955 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843454 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843463 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843472 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843481 4955 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843490 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843499 4955 reconciler_common.go:293] "Volume detached for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843508 4955 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843517 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843526 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843535 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843544 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843574 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843586 4955 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843597 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843609 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843621 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843636 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843649 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843658 4955 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843668 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843679 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843690 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843707 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843721 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843734 4955 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843747 4955 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843758 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843768 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843777 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843788 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843798 4955 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843808 4955 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843817 4955 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843829 4955 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843839 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843848 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843857 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843866 4955 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843875 4955 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843885 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843893 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843902 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843912 4955 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843921 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843931 4955 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843940 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843948 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843957 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843966 4955 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843976 4955 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843986 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843996 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844005 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844014 4955 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844024 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844033 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844042 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844052 4955 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844060 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844069 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844078 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844087 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844096 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844105 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844114 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844123 4955 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844133 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844143 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844152 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844162 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844171 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844182 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844194 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.847493 4955 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851730 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.852650 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.855912 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.857746 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.858960 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841197 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841426 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841474 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841704 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.859507 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841774 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841766 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.841970 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842471 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842598 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842858 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843233 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.842712 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843747 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843774 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.843870 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844402 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844426 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844440 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844516 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.844616 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.845438 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.845600 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.845704 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.846339 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.847952 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.847956 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.848173 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.848403 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.848749 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.848782 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.849001 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.849319 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.849399 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.850041 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.850444 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.850571 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.850631 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.850791 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851078 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851138 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851315 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.851321 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:02:50.351207528 +0000 UTC m=+21.263543988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851327 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851422 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851550 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851643 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.851944 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.852180 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.852602 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.853029 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.853047 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.853091 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.855190 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.855854 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.855885 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.856285 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.856497 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.857928 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.857995 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.858248 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.858455 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.853114 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.858905 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.858963 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.859182 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.859263 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.859470 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.861185 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.861221 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.861812 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.862800 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.862845 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.862939 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.863120 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.863603 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.863952 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864111 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864121 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864634 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864667 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864755 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864736 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864812 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.864896 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:50.364862567 +0000 UTC m=+21.277199027 (durationBeforeRetry 500ms). 
Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.864994 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.865112 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:50.365100232 +0000 UTC m=+21.277436682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.865229 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.867156 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.867546 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.867889 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.869040 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.869268 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.869901 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.870194 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.870586 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.870841 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.871048 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.871277 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.871619 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.871734 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.874004 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.879549 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.879601 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.879618 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.879707 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:50.37967813 +0000 UTC m=+21.292014580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.879731 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.880188 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.880221 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.880242 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:49 crc kubenswrapper[4955]: E0202 13:02:49.880309 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:50.380287284 +0000 UTC m=+21.292623964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.884953 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.890966 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.892951 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.897600 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.898982 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.909318 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.920164 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.928823 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.938435 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:43Z\\\",\\\"message\\\":\\\"W0202 13:02:32.772894 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
13:02:32.773268 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770037352 cert, and key in /tmp/serving-cert-26770074/serving-signer.crt, /tmp/serving-cert-26770074/serving-signer.key\\\\nI0202 13:02:33.026706 1 observer_polling.go:159] Starting file observer\\\\nW0202 13:02:33.030255 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:02:33.030386 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:33.032547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-26770074/tls.crt::/tmp/serving-cert-26770074/tls.key\\\\\\\"\\\\nF0202 13:02:43.345910 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944784 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944840 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944930 4955 reconciler_common.go:293] "Volume detached 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944954 4955 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944967 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944980 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.944993 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945004 4955 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945015 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945027 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945040 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945055 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945067 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945079 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945091 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945102 4955 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945117 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945130 4955 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945141 4955 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945153 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945167 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945180 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945190 4955 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945202 4955 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945214 4955 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945225 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945236 4955 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945248 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945259 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945270 4955 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945281 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945292 4955 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945303 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945314 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945327 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945337 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945350 4955 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945361 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945372 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945383 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945395 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945406 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945418 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945431 4955 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945443 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945457 4955 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945470 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945485 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945497 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945508 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945521 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945533 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945544 4955 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945583 4955 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945595 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945607 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945620 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945630 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945629 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945641 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945670 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945684 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945719 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945746 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945757 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945765 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945774 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945783 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945792 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945801 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945823 4955 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945831 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945839 4955 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945848 4955 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945857 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945866 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945875 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945898 4955 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945906 4955 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945915 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945923 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945932 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945940 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945949 4955 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945957 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.945981 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946011 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946021 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946031 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946058 4955 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946067 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946076 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946085 4955 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946093 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946101 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946110 4955 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946133 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946142 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946151 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.946163 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.975737 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.983860 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:02:49 crc kubenswrapper[4955]: W0202 13:02:49.987916 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-6fa296ab0c5d2f77a16556159065ec224557fba172a35471ac36777518c7d829 WatchSource:0}: Error finding container 6fa296ab0c5d2f77a16556159065ec224557fba172a35471ac36777518c7d829: Status 404 returned error can't find the container with id 6fa296ab0c5d2f77a16556159065ec224557fba172a35471ac36777518c7d829 Feb 02 13:02:49 crc kubenswrapper[4955]: I0202 13:02:49.989190 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:02:49 crc kubenswrapper[4955]: W0202 13:02:49.996586 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-195c89ecc7a33e948d4381b06587af2ceae4c24bbfd9e6b29e8b3ff3a1e9c009 WatchSource:0}: Error finding container 195c89ecc7a33e948d4381b06587af2ceae4c24bbfd9e6b29e8b3ff3a1e9c009: Status 404 returned error can't find the container with id 195c89ecc7a33e948d4381b06587af2ceae4c24bbfd9e6b29e8b3ff3a1e9c009 Feb 02 13:02:50 crc kubenswrapper[4955]: W0202 13:02:49.999543 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7acb16105503d6b981352f8c00ab23f0663659131b69ebeefce5d992491d8a4e WatchSource:0}: Error finding container 7acb16105503d6b981352f8c00ab23f0663659131b69ebeefce5d992491d8a4e: Status 404 returned error can't find the container with id 7acb16105503d6b981352f8c00ab23f0663659131b69ebeefce5d992491d8a4e Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.450116 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.450199 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.450232 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.450289 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.450313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450446 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 
13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450512 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:51.450491763 +0000 UTC m=+22.362828233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450781 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450790 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450816 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450834 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450858 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:02:51.45084493 +0000 UTC m=+22.363181380 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450800 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450879 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:51.450871511 +0000 UTC m=+22.363207961 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450883 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450904 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450921 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:51.450906852 +0000 UTC m=+22.363243322 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.450964 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:51.450949353 +0000 UTC m=+22.363285813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.673825 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:43:09.169152314 +0000 UTC Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.716387 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.716683 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.825462 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82"} Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.825537 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6fa296ab0c5d2f77a16556159065ec224557fba172a35471ac36777518c7d829"} Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.827809 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.831618 4955 scope.go:117] "RemoveContainer" containerID="b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.831696 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7acb16105503d6b981352f8c00ab23f0663659131b69ebeefce5d992491d8a4e"} Feb 02 13:02:50 crc kubenswrapper[4955]: E0202 13:02:50.831913 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.834847 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b"} Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.834897 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d"} Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.834923 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"195c89ecc7a33e948d4381b06587af2ceae4c24bbfd9e6b29e8b3ff3a1e9c009"} Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.845351 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.861217 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.877066 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff71c5de3ff826477e1c0c96963e333fef98be4b2371067a50f0d19644d4f67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:43Z\\\",\\\"message\\\":\\\"W0202 13:02:32.772894 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
13:02:32.773268 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770037352 cert, and key in /tmp/serving-cert-26770074/serving-signer.crt, /tmp/serving-cert-26770074/serving-signer.key\\\\nI0202 13:02:33.026706 1 observer_polling.go:159] Starting file observer\\\\nW0202 13:02:33.030255 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:02:33.030386 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:33.032547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-26770074/tls.crt::/tmp/serving-cert-26770074/tls.key\\\\\\\"\\\\nF0202 13:02:43.345910 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.897101 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.914240 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.932894 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.947696 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.960961 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.976833 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:50 crc kubenswrapper[4955]: I0202 13:02:50.990257 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.003277 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.016725 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.032762 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.049296 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.468969 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.469031 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.469052 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.469070 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.469090 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469157 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:02:53.469125878 +0000 UTC m=+24.381462328 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469194 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469209 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469220 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469229 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469269 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:53.469256712 +0000 UTC m=+24.381593162 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469297 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469316 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469374 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469395 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469327 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:53.469307553 +0000 UTC m=+24.381644003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469481 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:53.469452966 +0000 UTC m=+24.381789416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.469504 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:53.469495787 +0000 UTC m=+24.381832237 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.674724 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:31:34.240744628 +0000 UTC Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.716218 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.716246 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.716386 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:51 crc kubenswrapper[4955]: E0202 13:02:51.716748 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.719693 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.720217 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.720989 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.721620 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.722194 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.722687 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.723234 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.723749 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.724357 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.726076 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.726576 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.727381 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.727903 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.728410 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.728935 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.729428 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.730111 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.730604 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.731179 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.731793 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.732317 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.732919 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.733369 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.734073 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.734541 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.735233 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.735923 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.739455 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.740063 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.740981 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.741473 4955 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.741625 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.743823 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.744423 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.744880 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.746340 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.748050 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.748548 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.749604 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.750283 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.751202 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.751865 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.753964 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.755008 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.756284 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.756978 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.765958 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.767060 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.768358 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.768888 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.769686 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.770223 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.770787 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 13:02:51 crc kubenswrapper[4955]: I0202 13:02:51.771654 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.675515 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:40:42.161253475 +0000 UTC Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.716319 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:52 crc kubenswrapper[4955]: E0202 13:02:52.716540 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.841974 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6"} Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.858771 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.878855 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.900463 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.920722 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.938121 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.954132 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:52 crc kubenswrapper[4955]: I0202 13:02:52.965095 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.518675 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.518764 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.518803 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.518834 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.518908 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.518937 4955 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.518950 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519010 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:02:57.518982435 +0000 UTC m=+28.431318905 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519048 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:57.519038256 +0000 UTC m=+28.431374796 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519076 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.519085 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519124 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:57.519107378 +0000 UTC m=+28.431443908 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519113 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519201 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519219 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519272 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:57.519259362 +0000 UTC m=+28.431595902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519220 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.519328 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:57.519320623 +0000 UTC m=+28.431657173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.676655 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:43:21.129409035 +0000 UTC Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.716146 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.716391 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:53 crc kubenswrapper[4955]: I0202 13:02:53.716539 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:53 crc kubenswrapper[4955]: E0202 13:02:53.716900 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.677146 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:29:01.824759412 +0000 UTC Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.715627 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:54 crc kubenswrapper[4955]: E0202 13:02:54.715881 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.921480 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.923252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.923292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.923301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.923356 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.929387 4955 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.929609 4955 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.931464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.931513 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.931525 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.932916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.932967 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:54Z","lastTransitionTime":"2026-02-02T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:54 crc kubenswrapper[4955]: E0202 13:02:54.956252 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:54Z is after 
2025-08-24T17:21:41Z" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.960445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.960480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.960491 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.960508 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.960522 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:54Z","lastTransitionTime":"2026-02-02T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:54 crc kubenswrapper[4955]: E0202 13:02:54.980414 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:54Z is after 
2025-08-24T17:21:41Z" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.984800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.984837 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.984854 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.984873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:54 crc kubenswrapper[4955]: I0202 13:02:54.984888 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:54Z","lastTransitionTime":"2026-02-02T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: E0202 13:02:55.000750 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:54Z is after 
2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.004078 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.004109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.004120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.004138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.004147 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: E0202 13:02:55.014440 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 
2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.017573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.017880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.017902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.017919 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.017932 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: E0202 13:02:55.032958 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 
2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: E0202 13:02:55.033076 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.034426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.034461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.034478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.034501 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.034520 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.136663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.136720 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.136732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.136745 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.136754 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.240042 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.240084 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.240094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.240108 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.240118 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.342174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.342225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.342241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.342275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.342292 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.409657 4955 csr.go:261] certificate signing request csr-shkdm is approved, waiting to be issued Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.444244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.444272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.444282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.444295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.444303 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.448288 4955 csr.go:257] certificate signing request csr-shkdm is issued Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.535131 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dxh2p"] Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.535446 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.537686 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-crzll"] Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.537866 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.539491 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.539708 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.539782 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.540519 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.540693 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.542223 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.542379 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.547499 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.547541 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.547573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.547591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.547603 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.556504 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.579701 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.598692 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
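Every "Failed to update status for pod" entry in this section fails the same way: the pod.network-node-identity.openshift.io webhook's serving certificate expired at 2025-08-24T17:21:41Z, long before the node's clock of 2026-02-02T13:02:55Z, so the kubelet's TLS handshake to https://127.0.0.1:9743 is rejected before any patch is applied. A minimal Go sketch of the same validity-window check, using a hypothetical PEM path for the webhook's serving certificate:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Minimal sketch of the NotBefore/NotAfter window check that fails the
// webhook handshake above. The path is hypothetical; point it at the
// webhook's serving certificate.
func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("x509: certificate has expired or is not yet valid")
	}
}
```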
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.610358 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.627798 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.634703 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff5509a-2943-4526-8bcc-900dca52a6b0-host\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.634746 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a985401e-d37f-4c38-a506-93c3f3ccd986-hosts-file\") pod \"node-resolver-dxh2p\" (UID: \"a985401e-d37f-4c38-a506-93c3f3ccd986\") " pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.634857 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ff5509a-2943-4526-8bcc-900dca52a6b0-serviceca\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.634930 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42lc\" (UniqueName: \"kubernetes.io/projected/a985401e-d37f-4c38-a506-93c3f3ccd986-kube-api-access-j42lc\") pod \"node-resolver-dxh2p\" (UID: \"a985401e-d37f-4c38-a506-93c3f3ccd986\") " pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.634970 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hdn\" (UniqueName: \"kubernetes.io/projected/9ff5509a-2943-4526-8bcc-900dca52a6b0-kube-api-access-79hdn\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.641896 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.649371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.649416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.649426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.649440 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.649450 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.660065 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.671315 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.677992 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:24:13.614710992 +0000 UTC Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.684398 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.697696 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
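The certificate_manager line above reports the kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2026-01-10, i.e. the deadline has already passed, so rotation is due immediately. In client-go the deadline is a jittered point roughly 70-90% of the way through the certificate's lifetime; a sketch under that assumption (the exact fractions may differ by version, and the issue date below is hypothetical):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Sketch of client-go's jittered rotation deadline: a random point roughly
// 70-90% into the certificate's NotBefore..NotAfter window (assumed from
// upstream behavior; fractions may vary across versions).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, err := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.Add(-90 * 24 * time.Hour) // hypothetical issue date
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline, "due now:", time.Now().After(deadline))
}
```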
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.712477 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.715470 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.715609 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:55 crc kubenswrapper[4955]: E0202 13:02:55.715612 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:55 crc kubenswrapper[4955]: E0202 13:02:55.715923 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.733858 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.736151 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a985401e-d37f-4c38-a506-93c3f3ccd986-hosts-file\") pod \"node-resolver-dxh2p\" (UID: \"a985401e-d37f-4c38-a506-93c3f3ccd986\") " pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.736296 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a985401e-d37f-4c38-a506-93c3f3ccd986-hosts-file\") pod \"node-resolver-dxh2p\" (UID: \"a985401e-d37f-4c38-a506-93c3f3ccd986\") " pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.736432 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/9ff5509a-2943-4526-8bcc-900dca52a6b0-serviceca\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.736598 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j42lc\" (UniqueName: \"kubernetes.io/projected/a985401e-d37f-4c38-a506-93c3f3ccd986-kube-api-access-j42lc\") pod \"node-resolver-dxh2p\" (UID: \"a985401e-d37f-4c38-a506-93c3f3ccd986\") " pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.737086 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hdn\" (UniqueName: \"kubernetes.io/projected/9ff5509a-2943-4526-8bcc-900dca52a6b0-kube-api-access-79hdn\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.737390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff5509a-2943-4526-8bcc-900dca52a6b0-host\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.737524 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff5509a-2943-4526-8bcc-900dca52a6b0-host\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.738492 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ff5509a-2943-4526-8bcc-900dca52a6b0-serviceca\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.749143 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.752196 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.752222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.752232 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.752244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.752253 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.757547 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j42lc\" (UniqueName: \"kubernetes.io/projected/a985401e-d37f-4c38-a506-93c3f3ccd986-kube-api-access-j42lc\") pod \"node-resolver-dxh2p\" (UID: \"a985401e-d37f-4c38-a506-93c3f3ccd986\") " pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.765204 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hdn\" (UniqueName: \"kubernetes.io/projected/9ff5509a-2943-4526-8bcc-900dca52a6b0-kube-api-access-79hdn\") pod \"node-ca-crzll\" (UID: \"9ff5509a-2943-4526-8bcc-900dca52a6b0\") " pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.765356 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.779278 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
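Each failed patch above is logged as a single Go-quoted JSON string, which makes the payloads hard to read. A sketch that unquotes and pretty-prints such a payload; the literal below is a shortened stand-in, not one of the full patches from the log:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

// The kubelet logs each failed status patch as a Go-quoted JSON string.
// Sketch: unquote a "{\"metadata\":...}" payload and pretty-print it.
func main() {
	quoted := `"{\"metadata\":{\"uid\":\"9d751cbb\"},\"status\":{\"podIP\":null}}"`
	raw, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(raw), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}
```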
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.792148 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.801423 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:55Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.847914 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dxh2p" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.854494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.854803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.854814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.854829 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.854838 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.856732 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-crzll" Feb 02 13:02:55 crc kubenswrapper[4955]: W0202 13:02:55.869231 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff5509a_2943_4526_8bcc_900dca52a6b0.slice/crio-c414bffdcf33bd8fe72f166b531d4082760fc2f18d1d14767da4226cac56d068 WatchSource:0}: Error finding container c414bffdcf33bd8fe72f166b531d4082760fc2f18d1d14767da4226cac56d068: Status 404 returned error can't find the container with id c414bffdcf33bd8fe72f166b531d4082760fc2f18d1d14767da4226cac56d068 Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.963265 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.963309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.963319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.963335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:55 crc kubenswrapper[4955]: I0202 13:02:55.963346 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:55Z","lastTransitionTime":"2026-02-02T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.065600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.065638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.065650 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.065666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.065678 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.167335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.167363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.167377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.167390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.167398 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.269758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.269786 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.269794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.269806 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.269815 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.327944 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rplmq"] Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.328689 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.330459 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6l62h"] Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.330731 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7bpsz"] Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.330892 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z2cps"] Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.338855 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.338875 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.339109 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.339261 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.339369 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.339494 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.339939 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.340208 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.340801 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-k8s-cni-cncf-io\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341087 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-system-cni-dir\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341190 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppf4\" (UniqueName: \"kubernetes.io/projected/f2f37534-569f-4b2e-989a-f95866cb79e7-kube-api-access-2ppf4\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341222 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-netns\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341290 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-ovn\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341361 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lr87\" (UniqueName: \"kubernetes.io/projected/e0d35d22-ea6a-4ada-a086-b199c153c940-kube-api-access-8lr87\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341433 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlw7\" (UniqueName: \"kubernetes.io/projected/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-kube-api-access-8dlw7\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341457 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341517 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-kubelet\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341544 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-systemd-units\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341649 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-cnibin\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341723 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-os-release\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341791 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-slash\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.341879 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.342069 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-cni-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.342947 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vlp\" (UniqueName: \"kubernetes.io/projected/93e471b4-0f7f-4216-8f9c-911f21b64e1e-kube-api-access-x8vlp\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.342980 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.343030 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2f37534-569f-4b2e-989a-f95866cb79e7-proxy-tls\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.343049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-etc-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.343365 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-bin\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.343394 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-script-lib\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.343419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344120 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-node-log\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344153 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d35d22-ea6a-4ada-a086-b199c153c940-ovn-node-metrics-cert\") pod \"ovnkube-node-z2cps\" (UID: 
\"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344184 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344199 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-system-cni-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344213 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/93e471b4-0f7f-4216-8f9c-911f21b64e1e-cni-binary-copy\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344232 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-socket-dir-parent\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344247 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-conf-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344262 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-netd\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344289 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344304 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-env-overrides\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344319 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-netns\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344333 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-etc-kubernetes\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344351 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-ovn-kubernetes\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344366 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-hostroot\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344380 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-multus-certs\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344397 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-log-socket\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344413 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-daemon-config\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344430 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cnibin\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344443 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-os-release\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344457 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-cni-bin\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344472 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2f37534-569f-4b2e-989a-f95866cb79e7-mcd-auth-proxy-config\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344487 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cni-binary-copy\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344503 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-kubelet\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344518 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-cni-multus\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344541 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2f37534-569f-4b2e-989a-f95866cb79e7-rootfs\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344575 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-systemd\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344593 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-var-lib-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.344607 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-config\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc 
kubenswrapper[4955]: I0202 13:02:56.344784 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.345228 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.345341 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.345453 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.345531 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.345827 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.345959 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.346065 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.346139 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.346332 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.346412 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.346481 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.349721 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.365876 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.373890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.373924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.373936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.373953 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.373987 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.384045 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.399114 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.413135 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.431596 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.443092 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.445865 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-bin\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.445904 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-script-lib\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.445926 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.446010 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-bin\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.446747 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-script-lib\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc 
kubenswrapper[4955]: I0202 13:02:56.446765 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.446847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2f37534-569f-4b2e-989a-f95866cb79e7-proxy-tls\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.446870 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-etc-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447312 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-node-log\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447322 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-etc-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447342 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d35d22-ea6a-4ada-a086-b199c153c940-ovn-node-metrics-cert\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447361 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447372 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-node-log\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447379 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-system-cni-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447402 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/93e471b4-0f7f-4216-8f9c-911f21b64e1e-cni-binary-copy\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447433 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-socket-dir-parent\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-conf-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447480 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447501 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-netd\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447523 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-env-overrides\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447546 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-netns\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447585 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-etc-kubernetes\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447610 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-ovn-kubernetes\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447630 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-hostroot\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-multus-certs\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447717 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-daemon-config\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447738 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-log-socket\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447759 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cnibin\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.447781 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-os-release\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448286 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448328 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-cni-bin\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448044 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-multus-certs\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-etc-kubernetes\") pod \"multus-7bpsz\" 
(UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448096 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-ovn-kubernetes\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448010 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-os-release\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448132 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448162 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-conf-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448227 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-socket-dir-parent\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448239 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-env-overrides\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448254 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-netd\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448254 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-system-cni-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448277 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cnibin\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448276 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-log-socket\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448355 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2f37534-569f-4b2e-989a-f95866cb79e7-mcd-auth-proxy-config\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-hostroot\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448097 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-netns\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448518 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cni-binary-copy\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448544 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-kubelet\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448616 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2f37534-569f-4b2e-989a-f95866cb79e7-rootfs\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448644 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-systemd\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448667 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-var-lib-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448688 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-config\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448761 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-daemon-config\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448764 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-cni-multus\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448786 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-cni-multus\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448811 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2f37534-569f-4b2e-989a-f95866cb79e7-rootfs\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448818 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-k8s-cni-cncf-io\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448841 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-systemd\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448845 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ppf4\" (UniqueName: \"kubernetes.io/projected/f2f37534-569f-4b2e-989a-f95866cb79e7-kube-api-access-2ppf4\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448873 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-netns\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448894 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-ovn\") pod 
\"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448913 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lr87\" (UniqueName: \"kubernetes.io/projected/e0d35d22-ea6a-4ada-a086-b199c153c940-kube-api-access-8lr87\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448972 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-system-cni-dir\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449007 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlw7\" (UniqueName: \"kubernetes.io/projected/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-kube-api-access-8dlw7\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449029 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-cnibin\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449044 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2f37534-569f-4b2e-989a-f95866cb79e7-mcd-auth-proxy-config\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-run-k8s-cni-cncf-io\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.448779 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/93e471b4-0f7f-4216-8f9c-911f21b64e1e-cni-binary-copy\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449054 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-os-release\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449101 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-cni-bin\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " 
pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449099 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-netns\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449106 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-kubelet\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449133 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-kubelet\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449142 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-systemd-units\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449176 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-ovn\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449201 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-slash\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449223 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-cni-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449247 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vlp\" (UniqueName: \"kubernetes.io/projected/93e471b4-0f7f-4216-8f9c-911f21b64e1e-kube-api-access-x8vlp\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449250 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cni-binary-copy\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449269 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449283 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-var-lib-openvswitch\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449317 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-slash\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449436 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-os-release\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449444 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-multus-cni-dir\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449473 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-cnibin\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449475 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/93e471b4-0f7f-4216-8f9c-911f21b64e1e-host-var-lib-kubelet\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449489 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-system-cni-dir\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449513 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-systemd-units\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449523 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 12:57:55 +0000 UTC, rotation deadline is 2026-12-08 00:22:41.459852298 +0000 UTC Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 
13:02:56.449577 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7403h19m45.010278346s for next certificate rotation Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.449862 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.450068 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-config\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.452095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2f37534-569f-4b2e-989a-f95866cb79e7-proxy-tls\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.452107 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d35d22-ea6a-4ada-a086-b199c153c940-ovn-node-metrics-cert\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.464866 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.469105 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lr87\" (UniqueName: \"kubernetes.io/projected/e0d35d22-ea6a-4ada-a086-b199c153c940-kube-api-access-8lr87\") pod \"ovnkube-node-z2cps\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475000 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlw7\" (UniqueName: \"kubernetes.io/projected/959a2015-a670-4ebe-b0a1-d18c1b44cb4a-kube-api-access-8dlw7\") pod \"multus-additional-cni-plugins-rplmq\" (UID: \"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\") " pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475079 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ppf4\" (UniqueName: 
\"kubernetes.io/projected/f2f37534-569f-4b2e-989a-f95866cb79e7-kube-api-access-2ppf4\") pod \"machine-config-daemon-6l62h\" (UID: \"f2f37534-569f-4b2e-989a-f95866cb79e7\") " pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475827 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.475933 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.480040 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vlp\" (UniqueName: \"kubernetes.io/projected/93e471b4-0f7f-4216-8f9c-911f21b64e1e-kube-api-access-x8vlp\") pod \"multus-7bpsz\" (UID: \"93e471b4-0f7f-4216-8f9c-911f21b64e1e\") " pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.499575 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.518360 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.537016 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.547250 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.559993 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.570389 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.577969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.578011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.578023 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.578058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.578068 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.582347 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.593602 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.602735 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.611773 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.622762 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.634063 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.646041 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.655029 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rplmq" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.656923 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.662196 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7bpsz" Feb 02 13:02:56 crc kubenswrapper[4955]: W0202 13:02:56.669358 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod959a2015_a670_4ebe_b0a1_d18c1b44cb4a.slice/crio-63b5a88559ee0822bfb7a25c7828f429bb3984f13a034436c2104c7e415bcb8a WatchSource:0}: Error finding container 63b5a88559ee0822bfb7a25c7828f429bb3984f13a034436c2104c7e415bcb8a: Status 404 returned error can't find the container with id 63b5a88559ee0822bfb7a25c7828f429bb3984f13a034436c2104c7e415bcb8a Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.670354 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.673456 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: W0202 13:02:56.674023 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93e471b4_0f7f_4216_8f9c_911f21b64e1e.slice/crio-dff5381f51659c924b2d7155d06c8557ac013584a9f2b04dea707563507d3843 WatchSource:0}: Error finding container dff5381f51659c924b2d7155d06c8557ac013584a9f2b04dea707563507d3843: Status 404 returned error can't find the container with id dff5381f51659c924b2d7155d06c8557ac013584a9f2b04dea707563507d3843 Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.678021 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.678433 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:25:00.612823051 +0000 UTC Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.679646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.679757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.679864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.679948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.680014 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: W0202 13:02:56.690516 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f37534_569f_4b2e_989a_f95866cb79e7.slice/crio-849c32075162ad0852fb1ece063c56e460d473f48ae32ce01cdefc6ba31002e9 WatchSource:0}: Error finding container 849c32075162ad0852fb1ece063c56e460d473f48ae32ce01cdefc6ba31002e9: Status 404 returned error can't find the container with id 849c32075162ad0852fb1ece063c56e460d473f48ae32ce01cdefc6ba31002e9 Feb 02 13:02:56 crc kubenswrapper[4955]: W0202 13:02:56.711256 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0d35d22_ea6a_4ada_a086_b199c153c940.slice/crio-c1189fb33c8dc3937d32079c51d8b7d30e954f132051330fbb745dc397111107 WatchSource:0}: Error finding container c1189fb33c8dc3937d32079c51d8b7d30e954f132051330fbb745dc397111107: Status 404 returned error can't find the container with id c1189fb33c8dc3937d32079c51d8b7d30e954f132051330fbb745dc397111107 Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.715999 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:56 crc kubenswrapper[4955]: E0202 13:02:56.716179 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.783225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.783264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.783274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.783291 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.783300 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.853499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.853542 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"849c32075162ad0852fb1ece063c56e460d473f48ae32ce01cdefc6ba31002e9"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.854730 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerStarted","Data":"7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.854787 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerStarted","Data":"dff5381f51659c924b2d7155d06c8557ac013584a9f2b04dea707563507d3843"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.856041 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" exitCode=0 Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.856111 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.856141 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"c1189fb33c8dc3937d32079c51d8b7d30e954f132051330fbb745dc397111107"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.858654 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerStarted","Data":"63b5a88559ee0822bfb7a25c7828f429bb3984f13a034436c2104c7e415bcb8a"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.860850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dxh2p" event={"ID":"a985401e-d37f-4c38-a506-93c3f3ccd986","Type":"ContainerStarted","Data":"de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.860880 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dxh2p" event={"ID":"a985401e-d37f-4c38-a506-93c3f3ccd986","Type":"ContainerStarted","Data":"f6fc94c2889684e796849c6cb784baab838363c087739e0ca12db5dc40bbd9ad"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.863477 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-crzll" event={"ID":"9ff5509a-2943-4526-8bcc-900dca52a6b0","Type":"ContainerStarted","Data":"c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.863532 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-crzll" event={"ID":"9ff5509a-2943-4526-8bcc-900dca52a6b0","Type":"ContainerStarted","Data":"c414bffdcf33bd8fe72f166b531d4082760fc2f18d1d14767da4226cac56d068"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.867413 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cn
i/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.879238 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.886896 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.886939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.886954 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.886974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.886989 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.898275 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.912306 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.925550 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.939881 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.951013 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.971217 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.991373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.991779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.991788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.991802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.991810 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:56Z","lastTransitionTime":"2026-02-02T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:56 crc kubenswrapper[4955]: I0202 13:02:56.994743 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.008400 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.023544 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.037923 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.055019 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.067612 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.079401 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.089405 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.095643 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.095690 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.095701 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.095719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.095729 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.101981 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.114520 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.126266 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.139390 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.158149 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z 
is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.171735 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.182583 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\
":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.193994 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.197447 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.197481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.197497 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.197512 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.197523 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.207260 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z"
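The status-patch failure above, and every similar "Failed to update status for pod" entry that follows, shares a single root cause: the API server must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 before accepting the patch, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. The following is not part of the log; it is a minimal Go sketch, assuming the webhook endpoint from the log is reachable from the node, that prints the certificate's validity window so the expiry can be confirmed independently of the kubelet:

package main

// Dial the webhook endpoint named in the log and print its serving
// certificate's validity window. InsecureSkipVerify is deliberate:
// verification would fail for exactly the reason the kubelet reports,
// and we only want to inspect the certificate, not trust it.

import (
    "crypto/tls"
    "fmt"
    "log"
    "time"
)

func main() {
    addr := "127.0.0.1:9743" // taken from the webhook URL in the entries above
    conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    if err != nil {
        log.Fatalf("dial %s: %v", addr, err)
    }
    defer conn.Close()

    cert := conn.ConnectionState().PeerCertificates[0]
    fmt.Println("subject:  ", cert.Subject)
    fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
    fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
    fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

With the node clock at 2026-02-02T13:02:57Z, notAfter should print as 2025-08-24T17:21:41Z and expired as true, matching the x509 error in the surrounding entries.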
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.300419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.300470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.300481 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.300499 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.300511 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.403155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.403190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.403201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.403215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.403224 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.505396 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.505422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.505433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.505447 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.505459 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.560594 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.560698 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:05.560671769 +0000 UTC m=+36.473008219 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.560835 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.560890 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.560930 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.560976 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561206 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561722 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 
13:02:57.561685 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561776 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561764 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:05.561747633 +0000 UTC m=+36.474084093 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561805 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561817 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:05.561805764 +0000 UTC m=+36.474142214 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561750 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561872 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561883 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:05.561852735 +0000 UTC m=+36.474189205 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561892 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.561935 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:05.561924367 +0000 UTC m=+36.474261027 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.607065 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.607087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.607096 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.607108 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.607117 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.680194 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:19:54.73059874 +0000 UTC Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.708729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.708766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.708778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.708794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.708815 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.716145 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.716228 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.716288 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:57 crc kubenswrapper[4955]: E0202 13:02:57.716345 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
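From this point on, the capture is dominated by the same few failure signatures repeating once per sync loop. When working with the full journal rather than this excerpt, a tally is quicker than reading linearly; the following is a small Go sketch (a hypothetical triage helper, not an OpenShift tool) that counts the signatures seen above, reading from stdin, e.g. fed by journalctl -u kubelet:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    // Substrings taken verbatim from the kubelet errors in this log.
    signatures := map[string]string{
        "certificate has expired":   "webhook serving cert expired (network-node-identity)",
        "no CNI configuration file": "CNI config missing - network plugin not ready",
        "not found in the list of registered CSI drivers": "CSI driver not registered (hostpath-provisioner)",
        "Failed to update status for pod":                 "pod status patch rejected",
    }
    counts := map[string]int{}

    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024) // status-patch lines are very long
    for sc.Scan() {
        for needle, label := range signatures {
            if strings.Contains(sc.Text(), needle) {
                counts[label]++
            }
        }
    }
    for label, n := range counts {
        fmt.Printf("%6d  %s\n", n, label)
    }
}

The expired-certificate and status-patch counts should rise in lockstep, since every patch attempt fails on the same webhook call; renewing that one serving certificate would clear both signatures at once.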
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.811266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.811307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.811322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.811341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.811353 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.868795 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.868835 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.868845 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.868854 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.868863 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.868873 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.869949 4955 generic.go:334] "Generic (PLEG): container finished" podID="959a2015-a670-4ebe-b0a1-d18c1b44cb4a" containerID="5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641" exitCode=0 Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.870021 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerDied","Data":"5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.872069 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.883693 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.894321 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.905657 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.913993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.914038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.914049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.914069 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.914080 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:57Z","lastTransitionTime":"2026-02-02T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.915241 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.925776 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.939446 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.950676 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.978026 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z 
is after 2025-08-24T17:21:41Z" Feb 02 13:02:57 crc kubenswrapper[4955]: I0202 13:02:57.993172 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.005970 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.016811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.016865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.017631 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.017711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.017722 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.018954 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc 
kubenswrapper[4955]: I0202 13:02:58.032891 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.043312 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.055291 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.066934 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.082734 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.095586 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"
host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.105955 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.114956 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.119863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.119920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.119932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.119947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.119979 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.127368 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.137121 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.150530 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.158455 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.168627 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.179148 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.194466 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:58Z 
is after 2025-08-24T17:21:41Z" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.222247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.222364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.222379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.222395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.222408 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.325133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.325164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.325175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.325223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.325233 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.427020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.427054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.427064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.427079 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.427088 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.529326 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.529360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.529368 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.529383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.529393 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.536574 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.537396 4955 scope.go:117] "RemoveContainer" containerID="b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a" Feb 02 13:02:58 crc kubenswrapper[4955]: E0202 13:02:58.537591 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.631182 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.631227 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.631239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.631256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.631268 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.681109 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:01:47.092408762 +0000 UTC Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.715431 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:58 crc kubenswrapper[4955]: E0202 13:02:58.715578 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.733755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.733847 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.733864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.733883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.733894 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.835963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.836002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.836013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.836028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.836042 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.938663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.938739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.938751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.938768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:58 crc kubenswrapper[4955]: I0202 13:02:58.938779 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:58Z","lastTransitionTime":"2026-02-02T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.041401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.041440 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.041451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.041466 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.041479 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.143878 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.143923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.143936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.143954 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.143971 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.246487 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.246528 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.246543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.246571 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.246582 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.349011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.349048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.349058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.349072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.349081 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.451461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.451514 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.451530 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.451551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.451589 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.519175 4955 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.554683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.554716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.554725 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.554740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.554751 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.657255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.657316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.657334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.657359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.657375 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.681787 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:05:37.600463925 +0000 UTC Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.716260 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:59 crc kubenswrapper[4955]: E0202 13:02:59.716370 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.716758 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:59 crc kubenswrapper[4955]: E0202 13:02:59.716844 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.734149 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc 
kubenswrapper[4955]: I0202 13:02:59.748442 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.759261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.759309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.759324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.759341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.759353 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:59Z","lastTransitionTime":"2026-02-02T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.760140 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.774180 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.794263 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.806829 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.820083 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.833603 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:59 crc kubenswrapper[4955]: I0202 13:02:59.846150 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.431318 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.431829 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:52:24.641568318 +0000 UTC Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.434795 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:02 crc kubenswrapper[4955]: E0202 13:03:02.434921 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.436230 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.436261 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:02 crc kubenswrapper[4955]: E0202 13:03:02.436361 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:02 crc kubenswrapper[4955]: E0202 13:03:02.436610 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.436931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.436958 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.436969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.436987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.437025 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:02Z","lastTransitionTime":"2026-02-02T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.450178 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:02Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.465477 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:02Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.488542 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:02Z 
is after 2025-08-24T17:21:41Z" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.539173 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.539410 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.539419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.539431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.539441 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:02Z","lastTransitionTime":"2026-02-02T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.642602 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.642641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.642651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.642670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.642680 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:02Z","lastTransitionTime":"2026-02-02T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.744959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.744990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.744999 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.745012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.745022 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:02Z","lastTransitionTime":"2026-02-02T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.847751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.848206 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.848284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.848346 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.848409 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:02Z","lastTransitionTime":"2026-02-02T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.951640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.951691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.951706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.951726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:02 crc kubenswrapper[4955]: I0202 13:03:02.951742 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:02Z","lastTransitionTime":"2026-02-02T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.054960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.055266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.055276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.055293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.055304 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.157930 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.157989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.158012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.158042 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.158064 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.260422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.260449 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.260460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.260472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.260481 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.363021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.363060 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.363070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.363085 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.363095 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.432901 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:37:19.566990358 +0000 UTC Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.445308 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.447801 4955 generic.go:334] "Generic (PLEG): container finished" podID="959a2015-a670-4ebe-b0a1-d18c1b44cb4a" containerID="ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe" exitCode=0 Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.447857 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerDied","Data":"ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.460920 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.465303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.465423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.465492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.465590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.465678 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.475201 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.497832 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.511438 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.523990 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.539830 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.578983 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.579981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.580079 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.580138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.580196 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.580257 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.617198 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3
b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.636209 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.650357 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.667337 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.682026 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.683191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.683228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.683238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.683255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.683271 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.701402 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.715309 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:03 crc kubenswrapper[4955]: E0202 13:03:03.715822 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.786590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.786698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.786759 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.786827 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.786884 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.889598 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.889657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.889671 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.889691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.889705 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.992050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.992359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.992460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.992536 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:03 crc kubenswrapper[4955]: I0202 13:03:03.992640 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:03Z","lastTransitionTime":"2026-02-02T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.095450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.095500 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.095514 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.095536 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.095551 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.198252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.198288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.198298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.198311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.198319 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.300442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.300567 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.300581 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.300597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.300607 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.404606 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.404878 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.404959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.405036 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.405097 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.434073 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:42:03.870686149 +0000 UTC Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.454171 4955 generic.go:334] "Generic (PLEG): container finished" podID="959a2015-a670-4ebe-b0a1-d18c1b44cb4a" containerID="50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132" exitCode=0 Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.454216 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerDied","Data":"50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.468130 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.494546 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.507653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.507697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.507708 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.507724 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.507739 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.509782 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.524325 4955 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.538657 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.551216 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.560864 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.573239 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.584879 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.595510 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.608619 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.619635 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.628585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.628616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.628628 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.628645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.628658 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.637131 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3
b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.716132 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.716201 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:04 crc kubenswrapper[4955]: E0202 13:03:04.716273 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:04 crc kubenswrapper[4955]: E0202 13:03:04.716387 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.730987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.731034 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.731048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.731069 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.731083 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.833941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.834228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.834236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.834250 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.834259 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.937294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.937323 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.937332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.937345 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:04 crc kubenswrapper[4955]: I0202 13:03:04.937353 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:04Z","lastTransitionTime":"2026-02-02T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.039870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.039906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.039916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.039932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.039953 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.142243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.142275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.142287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.142305 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.142315 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.245635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.245674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.245687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.245704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.245717 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.348363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.348444 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.348472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.348504 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.348528 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.349873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.349934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.349955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.349975 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.349988 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.368760 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.372886 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.372919 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.372933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.372949 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.372960 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.385532 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.389255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.389322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.389342 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.389368 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.389389 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.408489 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.413706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414324 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
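The repeated Ready=False condition above traces back to one concrete check: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The following is a minimal Go sketch of that directory test; the path is taken from the log line itself, while the file-extension filter and everything else are illustrative assumptions, not the kubelet's or CRI-O's actual implementation.

// cnicheck.go - a minimal sketch of the readiness test implied by the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message above.
// Only the directory path comes from the log; the rest is illustrative.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path reported by the kubelet above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// assumed extensions for CNI config files; not an exhaustive list
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("found CNI config: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration files: node will stay NotReady")
	}
}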
event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.414324 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.431037 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.435074 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:25:22.7108164 +0000 UTC
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.436627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.436666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.436687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.436715 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.436730 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.452526 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{…}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.452747 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.454850 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.454919 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.454938 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.454958 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.454973 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.462304 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.462869 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.463074 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.467669 4955 generic.go:334] "Generic (PLEG): container finished" podID="959a2015-a670-4ebe-b0a1-d18c1b44cb4a" containerID="fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a" exitCode=0 Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.467933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerDied","Data":"fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.488848 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa
0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.491574 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.491934 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.509149 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.524515 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.543279 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.555197 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.560184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.560228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.560242 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.560260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.560273 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.567308 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.581361 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.594324 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.605776 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.616083 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.626893 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.643571 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.653915 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.662324 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.662433 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662481 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:21.662453321 +0000 UTC m=+52.574789771 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662593 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662613 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662628 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.662630 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662676 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:21.662659745 +0000 UTC m=+52.574996205 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.662721 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.662749 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662821 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662834 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662845 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662874 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:21.66286324 +0000 UTC m=+52.575199700 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.662981 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.663026 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:03:21.663015293 +0000 UTC m=+52.575351813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.663119 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.663156 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:21.663146746 +0000 UTC m=+52.575483216 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.664905 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.664985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.665011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.665042 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.665088 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.665378 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.682509 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.692786 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.702653 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.711622 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.716199 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:05 crc kubenswrapper[4955]: E0202 13:03:05.716313 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.723497 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.735868 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.795648 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587
330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.797266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.797310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.797321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.797335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.797344 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.810241 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.822116 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.834472 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.847200 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.858756 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:05Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.899506 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.899545 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.899568 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.899583 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:05 crc kubenswrapper[4955]: I0202 13:03:05.899592 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:05Z","lastTransitionTime":"2026-02-02T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.002066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.002255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.002314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.002377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.002468 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.104596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.104637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.104647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.104661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.104670 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.207874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.207939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.207958 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.207980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.208003 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.310315 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.310369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.310387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.310408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.310420 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.413462 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.413509 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.413522 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.413543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.413571 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.436144 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:07:05.615461345 +0000 UTC Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.474868 4955 generic.go:334] "Generic (PLEG): container finished" podID="959a2015-a670-4ebe-b0a1-d18c1b44cb4a" containerID="b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32" exitCode=0 Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.474974 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerDied","Data":"b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.475069 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.492233 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.512482 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.517012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.517066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.517082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.517102 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.517116 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.530007 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.545893 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.560190 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.575503 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.589695 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.608354 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587
330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.623226 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.625754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.625793 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.625806 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.625823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.625835 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.640085 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.654731 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.666610 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.682424 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.716100 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:06 crc kubenswrapper[4955]: E0202 13:03:06.716202 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.716483 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:06 crc kubenswrapper[4955]: E0202 13:03:06.716537 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.727938 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.727969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.727985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.728001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.728009 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.830165 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.830220 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.830234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.830253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.830269 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.932103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.932145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.932157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.932174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:06 crc kubenswrapper[4955]: I0202 13:03:06.932185 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:06Z","lastTransitionTime":"2026-02-02T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.034951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.035001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.035014 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.035030 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.035041 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.137280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.137348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.137364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.137388 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.137403 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.239599 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.239642 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.239652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.239668 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.239679 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.341988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.342027 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.342038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.342052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.342066 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.436538 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:49:33.00164804 +0000 UTC Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.447920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.447950 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.447960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.447974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.447983 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.479729 4955 generic.go:334] "Generic (PLEG): container finished" podID="959a2015-a670-4ebe-b0a1-d18c1b44cb4a" containerID="cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a" exitCode=0 Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.479858 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.479933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerDied","Data":"cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.491108 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.504815 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.517692 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.529860 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.543828 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.552358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.552390 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.552402 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.552416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.552425 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.556504 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.567881 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.588222 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587
330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.600427 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.612987 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.625548 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.639281 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.652814 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.654216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.654241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.654253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.654268 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.654280 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.716187 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:07 crc kubenswrapper[4955]: E0202 13:03:07.716318 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.757094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.757131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.757142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.757158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.757196 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.859579 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.859630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.859641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.859654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.859664 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.962299 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.962327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.962336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.962349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:07 crc kubenswrapper[4955]: I0202 13:03:07.962359 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:07Z","lastTransitionTime":"2026-02-02T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.064663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.064696 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.064706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.064719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.064728 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.167421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.167473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.167490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.167512 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.167528 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.270767 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.270825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.270851 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.270881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.270901 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.373382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.373433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.373449 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.373468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.373481 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.437598 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:54:15.820546846 +0000 UTC Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.477931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.478219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.478318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.478442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.478767 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.484887 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/0.log" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.488731 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb" exitCode=1 Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.488814 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.489675 4955 scope.go:117] "RemoveContainer" containerID="f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.493369 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" event={"ID":"959a2015-a670-4ebe-b0a1-d18c1b44cb4a","Type":"ContainerStarted","Data":"9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.510985 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.527992 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.541950 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.556226 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.570266 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.581357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.581420 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc 
kubenswrapper[4955]: I0202 13:03:08.581436 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.581475 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.581488 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.584822 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.599257 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.611086 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.623880 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.637146 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.650479 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.662723 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.684130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.684162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.684170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.684183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.684192 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.687740 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.710593 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587
330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.715340 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.715340 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:08 crc kubenswrapper[4955]: E0202 13:03:08.715454 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:08 crc kubenswrapper[4955]: E0202 13:03:08.715582 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.727532 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.744896 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.760330 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.771548 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.783776 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.786672 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.786703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.786712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.786726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.786738 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.798056 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v"] Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.798534 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.800502 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.800597 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.801659 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.815209 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.824279 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9edb0d45-28ef-4cd7-8a24-c720c2d23382-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.824329 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbf4l\" (UniqueName: \"kubernetes.io/projected/9edb0d45-28ef-4cd7-8a24-c720c2d23382-kube-api-access-pbf4l\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.824349 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9edb0d45-28ef-4cd7-8a24-c720c2d23382-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.824397 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9edb0d45-28ef-4cd7-8a24-c720c2d23382-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.827634 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.837725 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.847752 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.859966 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.871736 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.881937 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.889616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 
crc kubenswrapper[4955]: I0202 13:03:08.889826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.889887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.889952 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.890043 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.896363 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.906831 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.918635 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.924753 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbf4l\" (UniqueName: \"kubernetes.io/projected/9edb0d45-28ef-4cd7-8a24-c720c2d23382-kube-api-access-pbf4l\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.924791 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9edb0d45-28ef-4cd7-8a24-c720c2d23382-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.924825 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9edb0d45-28ef-4cd7-8a24-c720c2d23382-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.924847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9edb0d45-28ef-4cd7-8a24-c720c2d23382-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.925994 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9edb0d45-28ef-4cd7-8a24-c720c2d23382-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.926189 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9edb0d45-28ef-4cd7-8a24-c720c2d23382-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.935724 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9edb0d45-28ef-4cd7-8a24-c720c2d23382-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.940009 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.944753 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbf4l\" (UniqueName: \"kubernetes.io/projected/9edb0d45-28ef-4cd7-8a24-c720c2d23382-kube-api-access-pbf4l\") pod \"ovnkube-control-plane-749d76644c-2cq7v\" (UID: \"9edb0d45-28ef-4cd7-8a24-c720c2d23382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.952809 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.964930 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.990534 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587
330fa591578a773900b689cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:08Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.991957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.992001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.992016 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.992037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:08 crc kubenswrapper[4955]: I0202 13:03:08.992051 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:08Z","lastTransitionTime":"2026-02-02T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.004896 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.019482 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.030290 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.046629 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.063840 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.077237 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.093571 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.093608 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc 
kubenswrapper[4955]: I0202 13:03:09.093618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.093635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.093645 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.111846 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" Feb 02 13:03:09 crc kubenswrapper[4955]: W0202 13:03:09.123716 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9edb0d45_28ef_4cd7_8a24_c720c2d23382.slice/crio-24046ed963e01dc6084c0b9034e87dcd401b97cf07f0752f3c73f7ae129913c6 WatchSource:0}: Error finding container 24046ed963e01dc6084c0b9034e87dcd401b97cf07f0752f3c73f7ae129913c6: Status 404 returned error can't find the container with id 24046ed963e01dc6084c0b9034e87dcd401b97cf07f0752f3c73f7ae129913c6 Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.195216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.195255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.195274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.195291 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.195302 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.296729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.296754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.296763 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.296776 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.296785 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.399308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.399336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.399345 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.399359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.399370 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.438071 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:36:07.083392258 +0000 UTC Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.497919 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/1.log" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.498427 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/0.log" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.500510 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.500534 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.500543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.500571 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.500583 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.501179 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b" exitCode=1 Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.501223 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.501248 4955 scope.go:117] "RemoveContainer" containerID="f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.501792 4955 scope.go:117] "RemoveContainer" containerID="f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b" Feb 02 13:03:09 crc kubenswrapper[4955]: E0202 13:03:09.501909 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.504311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" event={"ID":"9edb0d45-28ef-4cd7-8a24-c720c2d23382","Type":"ContainerStarted","Data":"5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.504360 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" event={"ID":"9edb0d45-28ef-4cd7-8a24-c720c2d23382","Type":"ContainerStarted","Data":"43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.504372 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" event={"ID":"9edb0d45-28ef-4cd7-8a24-c720c2d23382","Type":"ContainerStarted","Data":"24046ed963e01dc6084c0b9034e87dcd401b97cf07f0752f3c73f7ae129913c6"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.519051 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 
13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.532294 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.543323 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.556910 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.572063 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.582462 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.593313 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:0
8Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.602952 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.602990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.603000 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.603015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.603025 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.605342 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.619042 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.630359 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.640044 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.650743 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.660628 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.672104 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.687013 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.697233 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.705228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.705267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.705278 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.705291 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.705302 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.713664 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.716295 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:09 crc kubenswrapper[4955]: E0202 13:03:09.716430 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.726677 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.739491 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.751230 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.760414 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.773998 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.787230 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.805340 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.806920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.806960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.806970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.806985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.806995 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.816363 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.826119 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.835052 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.844237 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.860031 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 
13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.872433 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.882682 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.895104 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.906372 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.908717 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hjcmj"] Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.909078 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:09 crc kubenswrapper[4955]: E0202 13:03:09.909131 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.910831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.910870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.910880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.910899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.910908 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:09Z","lastTransitionTime":"2026-02-02T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.915222 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.924488 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:
08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.935532 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.935863 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.936089 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mnn\" (UniqueName: \"kubernetes.io/projected/009c80d7-da9c-46cc-b0d2-570de04e6510-kube-api-access-f5mnn\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " 
pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.948105 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.957806 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.967747 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.975862 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.987208 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:09 crc kubenswrapper[4955]: I0202 13:03:09.997058 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.012427 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615
e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.012937 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.012960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.012970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.012984 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.012996 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.022266 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.033848 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 
13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.036954 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.037015 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mnn\" (UniqueName: \"kubernetes.io/projected/009c80d7-da9c-46cc-b0d2-570de04e6510-kube-api-access-f5mnn\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:10 crc kubenswrapper[4955]: E0202 13:03:10.037155 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:10 crc kubenswrapper[4955]: E0202 13:03:10.037248 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:10.537227479 +0000 UTC m=+41.449563929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.047215 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.053082 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mnn\" (UniqueName: \"kubernetes.io/projected/009c80d7-da9c-46cc-b0d2-570de04e6510-kube-api-access-f5mnn\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.060396 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.098946 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.115271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.115334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc 
kubenswrapper[4955]: I0202 13:03:10.115348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.115363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.115395 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.139099 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.177922 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.217735 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.217777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.217789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.217840 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.217854 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.218163 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.260743 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.299617 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.320349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.320386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.320431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.320454 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.320469 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.369983 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.379946 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.419772 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.423041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.423094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.423104 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.423118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.423127 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.438546 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:47:52.689639555 +0000 UTC Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.458054 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.509180 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/1.log" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.525353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.525393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.525406 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.525422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.525434 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.564498 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:10 crc kubenswrapper[4955]: E0202 13:03:10.564806 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:10 crc kubenswrapper[4955]: E0202 13:03:10.564895 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:11.564863628 +0000 UTC m=+42.477200078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.632705 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.632747 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.632755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.632771 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.632779 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.715798 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.715828 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:10 crc kubenswrapper[4955]: E0202 13:03:10.715923 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:10 crc kubenswrapper[4955]: E0202 13:03:10.716020 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.735025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.735070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.735082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.735097 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.735110 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.837003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.837043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.837059 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.837074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.837087 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.938777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.938817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.938829 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.938843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:10 crc kubenswrapper[4955]: I0202 13:03:10.938854 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:10Z","lastTransitionTime":"2026-02-02T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.041121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.041175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.041190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.041204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.041214 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.143675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.143738 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.143756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.143778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.143794 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.246772 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.246830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.246902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.246928 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.246994 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.348916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.348957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.348969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.348985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.348998 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.439608 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:05:07.801538477 +0000 UTC Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.451441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.451482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.451497 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.451514 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.451526 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.553374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.553419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.553430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.553445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.553454 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.572085 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:11 crc kubenswrapper[4955]: E0202 13:03:11.572232 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:11 crc kubenswrapper[4955]: E0202 13:03:11.572302 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:03:13.572281608 +0000 UTC m=+44.484618098 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.655641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.655684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.655697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.655714 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.655726 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.716084 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:11 crc kubenswrapper[4955]: E0202 13:03:11.716257 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.716112 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:11 crc kubenswrapper[4955]: E0202 13:03:11.716670 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.758540 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.758629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.758647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.758669 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.758687 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.861489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.861533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.861546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.861581 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.861594 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.964062 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.964107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.964122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.964143 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:11 crc kubenswrapper[4955]: I0202 13:03:11.964159 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:11Z","lastTransitionTime":"2026-02-02T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.067059 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.067113 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.067134 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.067149 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.067161 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.168989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.169030 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.169042 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.169060 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.169073 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.271773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.271845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.271870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.271899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.271918 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.373590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.373629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.373645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.373666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.373681 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.440090 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:14:26.487518316 +0000 UTC Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.476164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.476210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.476222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.476241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.476258 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.578409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.578456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.578469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.578486 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.578498 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.680255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.680296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.680306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.680319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.680328 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.715644 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.715644 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:12 crc kubenswrapper[4955]: E0202 13:03:12.715759 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:12 crc kubenswrapper[4955]: E0202 13:03:12.715822 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.782237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.782281 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.782293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.782308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.782319 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.899175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.899253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.899269 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.899327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:12 crc kubenswrapper[4955]: I0202 13:03:12.899346 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:12Z","lastTransitionTime":"2026-02-02T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.002970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.003011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.003020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.003034 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.003044 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.105184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.105239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.105255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.105276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.105292 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.207171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.207219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.207232 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.207247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.207262 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.309227 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.309261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.309271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.309296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.309304 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.411207 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.411247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.411255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.411268 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.411277 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.440991 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:08:11.273211977 +0000 UTC Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.514257 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.514351 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.514386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.514431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.514456 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.604430 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:13 crc kubenswrapper[4955]: E0202 13:03:13.604676 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:13 crc kubenswrapper[4955]: E0202 13:03:13.604767 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:17.604746004 +0000 UTC m=+48.517082464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.620599 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.620664 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.620674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.620814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.621023 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.715960 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:13 crc kubenswrapper[4955]: E0202 13:03:13.716139 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.716177 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:13 crc kubenswrapper[4955]: E0202 13:03:13.716507 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.716859 4955 scope.go:117] "RemoveContainer" containerID="b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.723722 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.723748 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.723758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.723769 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.723777 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.826219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.826279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.826294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.826317 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.826346 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.929355 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.929403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.929419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.929438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:13 crc kubenswrapper[4955]: I0202 13:03:13.929454 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:13Z","lastTransitionTime":"2026-02-02T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.031757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.031785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.031793 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.031807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.031816 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.133885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.133929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.133947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.133967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.133982 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.235812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.235847 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.235858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.235873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.235884 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.338752 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.338789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.338800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.338818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.338829 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.440986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.441024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.441036 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.441055 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.441066 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.441145 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:27:50.242625758 +0000 UTC Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.525448 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.527053 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.528072 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.541472 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.542746 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.542777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.542788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.542804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.542815 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.552672 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.563203 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:
08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.577972 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.592851 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.607727 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.621164 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.631102 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.640536 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.644830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.644857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.644866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.644882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.644893 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.655292 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.666077 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.677476 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.689423 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.704138 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615
e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.711974 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.715774 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:14 crc kubenswrapper[4955]: E0202 13:03:14.715886 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.716228 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:14 crc kubenswrapper[4955]: E0202 13:03:14.716295 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.747441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.747474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.747485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.747502 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.747514 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.850318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.850394 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.850418 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.850450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.850475 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.953409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.953446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.953456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.953469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:14 crc kubenswrapper[4955]: I0202 13:03:14.953478 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:14Z","lastTransitionTime":"2026-02-02T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.059987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.060046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.060058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.060077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.060095 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.162713 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.162766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.162779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.162797 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.162813 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.266219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.266277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.266290 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.266312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.266330 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.368821 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.368959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.369017 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.369083 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.369160 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
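
The "Error updating node status, will retry" records that follow (kubelet_node_status.go:585) embed the entire rejected node-status patch as an escaped JSON string inside err="failed to patch status \"{...}\"". Captured from the journal, the quotes arrive backslash-escaped, and the escaping depth depends on how the line was copied, so the payload needs unescaping before it parses. A short sketch of reading one, assuming Python 3; the raw string here is a hand-trimmed stand-in for the real payload, which is far longer:

    import json

    # Hand-trimmed stand-in for the escaped payload inside
    # err="failed to patch status \"{...}\""; the real string is far longer.
    raw = '{\\"status\\":{\\"conditions\\":[{\\"reason\\":\\"KubeletNotReady\\",\\"status\\":\\"False\\",\\"type\\":\\"Ready\\"}]}}'

    patch = json.loads(raw.replace('\\"', '"'))  # undo one level of escaping
    for cond in patch["status"]["conditions"]:
        print(cond["type"], cond["status"], cond["reason"])  # Ready False KubeletNotReady

Applied to the full patches below, the same loop would list all four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready), every one with status False and the Ready condition carrying reason KubeletNotReady.
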
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.441942 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:08:21.812975298 +0000 UTC
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.471726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.471778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.471790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.471807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.471816 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.574840 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.575169 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.575477 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.575755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.575905 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.673519 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.674533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.674731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.674866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.674992 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.689203 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.693005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.693032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.693043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.693055 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.693065 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.705389 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.708757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.708794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.708807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.708823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.708835 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.715736 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.715744 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.715835 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.715943 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.722026 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.725692 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.725733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.725746 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.725760 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.725773 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.736062 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.739155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.739195 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.739207 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.739224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.739237 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.751267 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:15 crc kubenswrapper[4955]: E0202 13:03:15.751420 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.752784 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.752815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.752825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.752839 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.752853 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.854926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.854969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.854981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.854994 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.855004 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.957331 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.957376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.957387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.957410 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:15 crc kubenswrapper[4955]: I0202 13:03:15.957422 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:15Z","lastTransitionTime":"2026-02-02T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.059807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.059879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.059901 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.059925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.059947 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.163141 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.163627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.163838 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.164043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.164232 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.266909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.266958 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.266973 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.266995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.267009 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.371794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.371884 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.371912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.371949 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.371981 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.442276 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:24:11.479265977 +0000 UTC Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.475053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.475092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.475103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.475120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.475133 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.581111 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.581152 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.581167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.581190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.581205 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.684003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.684051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.684064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.684082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.684096 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.715412 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:16 crc kubenswrapper[4955]: E0202 13:03:16.715572 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.715644 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:16 crc kubenswrapper[4955]: E0202 13:03:16.715700 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.786796 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.787077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.787201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.787302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.787394 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.889636 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.889862 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.889929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.889998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.890054 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.992296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.992336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.992349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.992367 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:16 crc kubenswrapper[4955]: I0202 13:03:16.992380 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:16Z","lastTransitionTime":"2026-02-02T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.094249 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.094461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.094565 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.094645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.094776 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.197499 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.197528 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.197540 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.197579 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.197591 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.299938 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.299991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.300003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.300022 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.300034 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.402330 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.402378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.402391 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.402406 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.402418 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.443277 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:11:50.312430897 +0000 UTC Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.505235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.505275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.505290 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.505310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.505371 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.607430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.607482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.607498 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.607517 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.607530 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.641371 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:17 crc kubenswrapper[4955]: E0202 13:03:17.641612 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:17 crc kubenswrapper[4955]: E0202 13:03:17.641894 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:03:25.641870469 +0000 UTC m=+56.554206939 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.710286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.710333 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.710346 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.710361 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.710372 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.715647 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.715711 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:17 crc kubenswrapper[4955]: E0202 13:03:17.715753 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:17 crc kubenswrapper[4955]: E0202 13:03:17.715852 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.812879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.812923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.812937 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.812955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.812970 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.916293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.916336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.916348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.916365 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:17 crc kubenswrapper[4955]: I0202 13:03:17.916377 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:17Z","lastTransitionTime":"2026-02-02T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.019090 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.019136 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.019148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.019166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.019180 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.122717 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.122803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.122832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.122862 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.122885 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.225691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.225728 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.225740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.225758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.225771 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.328129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.328161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.328173 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.328188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.328198 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.431091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.431138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.431150 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.431172 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.431189 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.443730 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:30:52.653427345 +0000 UTC Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.533142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.533189 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.533203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.533221 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.533235 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.635866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.635913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.635926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.635946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.635960 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.716123 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.716217 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:18 crc kubenswrapper[4955]: E0202 13:03:18.716292 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:18 crc kubenswrapper[4955]: E0202 13:03:18.716356 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.738898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.738941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.738951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.738968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.738997 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.841246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.841307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.841320 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.841343 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.841356 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.944234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.944326 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.944342 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.944362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:18 crc kubenswrapper[4955]: I0202 13:03:18.944377 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:18Z","lastTransitionTime":"2026-02-02T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.047582 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.047663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.047683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.047706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.047725 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.150845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.150906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.150924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.150948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.150966 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.254181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.254239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.254254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.254276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.254292 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.356531 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.356590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.356600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.356618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.356627 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.444727 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:58:19.582547792 +0000 UTC Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.459024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.459079 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.459103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.459126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.459143 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.562510 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.562629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.562655 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.562684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.562706 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.665271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.665350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.665375 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.665406 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.665429 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.716197 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.716297 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:19 crc kubenswrapper[4955]: E0202 13:03:19.716433 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:19 crc kubenswrapper[4955]: E0202 13:03:19.716688 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.741172 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.761520 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.770686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.770746 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.770864 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.770910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.770942 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.784442 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.806075 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.821501 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-conf
ig-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.837240 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.854303 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.866627 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.873255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.873287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.873299 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.873315 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.873327 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.878179 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.888874 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.926232 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.945943 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.961268 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.975734 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.975770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.975781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.975800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.975814 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:19Z","lastTransitionTime":"2026-02-02T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.981093 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:19 crc kubenswrapper[4955]: I0202 13:03:19.992304 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.078849 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.078899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.078911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.078928 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.078939 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.180629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.180698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.180721 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.180748 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.180766 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.272040 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.284093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.284170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.284196 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.284232 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.284256 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.288934 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.306861 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc7
9862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.327840 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.350092 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.372902 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.387197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.387271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc 
kubenswrapper[4955]: I0202 13:03:20.387300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.387332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.387358 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.397742 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.415740 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.431175 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o
://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.445778 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:46:02.525728383 +0000 UTC Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.446552 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.464262 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.479360 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.490281 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.490335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.490355 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.490378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.490395 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.494447 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.507583 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.519804 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.534371 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.548284 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.592865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.592916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.592933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.592955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.592971 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.696212 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.696238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.696247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.696261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.696270 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.716232 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.716248 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:03:20 crc kubenswrapper[4955]: E0202 13:03:20.716497 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:03:20 crc kubenswrapper[4955]: E0202 13:03:20.716680 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.797836 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.797891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.797908 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.797929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.797946 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.900970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.901009 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.901021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.901037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:20 crc kubenswrapper[4955]: I0202 13:03:20.901049 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:20Z","lastTransitionTime":"2026-02-02T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.003443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.003481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.003494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.003515 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.003528 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.106060 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.106158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.106177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.106200 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.106219 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.209418 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.209504 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.209523 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.210103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.210175 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.313233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.313301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.313360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.313379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.313391 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.416211 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.416261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.416276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.416295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.416309 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.445996 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:08:53.848963397 +0000 UTC
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.519513 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.519625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.519647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.519670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.519688 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.622639 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.622709 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.622727 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.622750 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.622767 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.687985 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688133 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:53.688104723 +0000 UTC m=+84.600441203 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.688220 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.688267 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.688364 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.688458 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688450 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688775 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:53.688759088 +0000 UTC m=+84.601095568 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688531 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688914 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688644 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688944 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.688652 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.689027 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:53.688998913 +0000 UTC m=+84.601335403 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.689042 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.689056 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.689057 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:53.689043854 +0000 UTC m=+84.601380344 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.689117 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:53.689091135 +0000 UTC m=+84.601427615 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.716492 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.716544 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.716753 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:03:21 crc kubenswrapper[4955]: E0202 13:03:21.716952 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.727624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.727704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.727733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.727791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.727829 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.830707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.830779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.830805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.830836 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.830858 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.933289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.933325 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.933335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.933346 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:21 crc kubenswrapper[4955]: I0202 13:03:21.933355 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:21Z","lastTransitionTime":"2026-02-02T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.036812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.036847 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.036858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.036872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.036881 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.139900 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.139963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.139980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.140003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.140034 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.243765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.243832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.243849 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.243879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.243899 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.347978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.348049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.348071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.348100 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.348125 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.447115 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:11:26.958040063 +0000 UTC
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.450737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.450780 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.450798 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.450823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.450839 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.553034 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.553080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.553092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.553109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.553121 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.656322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.656369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.656384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.656404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.656419 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.715532 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.715707 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:03:22 crc kubenswrapper[4955]: E0202 13:03:22.715819 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:03:22 crc kubenswrapper[4955]: E0202 13:03:22.715946 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.759198 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.759276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.759288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.759306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.759318 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.863091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.863148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.863167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.863192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.863211 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.965631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.965666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.965690 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.965713 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:22 crc kubenswrapper[4955]: I0202 13:03:22.965728 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:22Z","lastTransitionTime":"2026-02-02T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.068161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.068208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.068217 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.068233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.068242 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.102382 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.116519 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.127302 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.137583 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.152161 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.167174 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z"
2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.170981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.171013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.171023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.171037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.171046 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.175172 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.175845 4955 scope.go:117] "RemoveContainer" containerID="f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.178908 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.190071 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.199949 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.217143 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615
e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2925759f7eaa0571016fd3cbdd0c6610eab5587330fa591578a773900b689cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"message\\\":\\\"topping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:03:07.549655 6190 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 13:03:07.549669 6190 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 13:03:07.549698 6190 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:03:07.549703 6190 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:03:07.549707 6190 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:03:07.549714 6190 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:03:07.549719 6190 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 13:03:07.549727 6190 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:03:07.549734 6190 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:03:07.549745 6190 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:03:07.549749 6190 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:03:07.549769 6190 factory.go:656] Stopping watch factory\\\\nI0202 13:03:07.549786 6190 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13:03:07.549802 6190 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:03:07.549806 6190 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.229420 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.244101 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.254939 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.268152 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.274021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.274080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.274105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.274128 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.274137 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.285497 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.302440 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.318325 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.335052 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.344977 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.356445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 
13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.369652 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.377035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.377077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.377086 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.377101 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.377111 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.381227 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.395667 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: 
I0202 13:03:23.409666 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.419903 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.429312 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.439633 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.447967 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:32:24.387873303 +0000 UTC Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.451350 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.464349 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.475090 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.479143 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.479180 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.479190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.479204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.479213 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.496829 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.512107 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.524398 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.557993 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/1.log" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.560623 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.561089 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.580708 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.580755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.580765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.580780 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.580791 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.583351 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.594899 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.611964 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e
9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.622848 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.634506 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.645295 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.657439 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.669127 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.679856 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.683095 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.683152 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.683174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.683190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.683201 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.692090 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 
13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.704037 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.715349 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.715404 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:23 crc kubenswrapper[4955]: E0202 13:03:23.715483 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:23 crc kubenswrapper[4955]: E0202 13:03:23.715628 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.716718 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.729080 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.740427 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.748310 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.759516 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.784953 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.784992 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.785002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.785017 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.785030 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.887237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.887282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.887293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.887311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.887321 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.990267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.990616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.990627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.990664 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:23 crc kubenswrapper[4955]: I0202 13:03:23.990683 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:23Z","lastTransitionTime":"2026-02-02T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.093324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.093382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.093396 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.093414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.093425 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.195940 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.196008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.196028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.196052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.196070 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.298700 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.298768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.298788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.298811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.298828 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.401540 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.401604 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.401616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.401630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.401639 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.448726 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:19:46.363222136 +0000 UTC Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.503739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.503773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.503782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.503795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.503805 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.565400 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/2.log" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.566269 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/1.log" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.570640 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9" exitCode=1 Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.570716 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.570806 4955 scope.go:117] "RemoveContainer" containerID="f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.571865 4955 scope.go:117] "RemoveContainer" containerID="c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9" Feb 02 13:03:24 crc kubenswrapper[4955]: E0202 13:03:24.572114 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.597084 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e
9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f04f5cfd2d25310731a0f4978f20e7f3f17f3615e8e22b921ad3eaeacfb7c71b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"itional-cni-plugins-rplmq\\\\nI0202 13:03:09.458877 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:09.458895 6396 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:03:09.458906 6396 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 
2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168
.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.606178 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.606216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.606227 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.606243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.606255 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.609092 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.628144 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.652514 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.670420 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.685190 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.697634 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.708384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.708436 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.708457 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.708484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.708501 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.715046 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.715390 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.715391 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:24 crc kubenswrapper[4955]: E0202 13:03:24.715495 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:24 crc kubenswrapper[4955]: E0202 13:03:24.715588 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.729583 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.745117 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.760718 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.771843 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.782588 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.794522 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.807297 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.811761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.811815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.811828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.811845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.811860 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.824202 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.915075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.915156 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.915181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.915233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:24 crc kubenswrapper[4955]: I0202 13:03:24.915258 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:24Z","lastTransitionTime":"2026-02-02T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.018048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.018148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.018176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.018207 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.018230 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.121679 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.121732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.121745 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.121764 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.121777 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.224614 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.224722 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.224762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.224780 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.224793 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.327395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.327438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.327450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.327465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.327482 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.430896 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.430968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.430986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.431011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.431028 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.448874 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:05:01.488583573 +0000 UTC Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.535960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.536131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.536196 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.536297 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.536312 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.575191 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/2.log" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.578531 4955 scope.go:117] "RemoveContainer" containerID="c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9" Feb 02 13:03:25 crc kubenswrapper[4955]: E0202 13:03:25.578785 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.591897 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.603871 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.620633 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e
9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.632035 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.638853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.638899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.638912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.638932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.638945 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.643930 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.656300 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:
08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.668660 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.680269 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.696018 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.709888 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.716063 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:25 crc kubenswrapper[4955]: E0202 13:03:25.716165 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.716181 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:25 crc kubenswrapper[4955]: E0202 13:03:25.716329 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.719394 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.728484 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:25 crc kubenswrapper[4955]: E0202 13:03:25.728658 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:25 crc kubenswrapper[4955]: E0202 13:03:25.728714 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:41.728701124 +0000 UTC m=+72.641037574 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.729657 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.741301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.741345 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.741358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 
13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.741374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.741386 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.742647 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97
aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.757077 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.768322 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.778448 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.843524 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.843645 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.843656 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.843671 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.843679 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.945986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.946039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.946053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.946073 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.946091 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.953443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.953479 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.953490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.953503 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:25 crc kubenswrapper[4955]: I0202 13:03:25.953514 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:25Z","lastTransitionTime":"2026-02-02T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:25 crc kubenswrapper[4955]: E0202 13:03:25.995568 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:25.999962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:25.999991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.000000 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.000014 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.000024 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.013687 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.017343 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.017369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.017378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.017390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.017399 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.031254 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.035336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.035369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
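Every failed patch in this stretch dies the same way: the API server cannot call the node.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24, while the node clock reads 2026-02-02. A quick way to see exactly what the webhook is serving is to pull the certificate off the port directly; a minimal sketch, assuming shell access to the node and that openssl is installed:

    # Show the validity window of the certificate served on 127.0.0.1:9743
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null \
        | openssl x509 -noout -dates

Per the log above, notAfter should come back as Aug 24 17:21:41 2025 GMT; until that certificate is reissued, every node-status patch is rejected and the retry exhaustion below is expected.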
event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.035380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.035395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.035406 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.055387 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.058237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.058259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.058395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.058422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.058434 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.073828 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.073940 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.075676 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.075715 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.075727 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.075740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.075751 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.178164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.178208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.178223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.178238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.178251 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.281515 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.281580 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.281598 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.281617 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.281627 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.384190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.384236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.384249 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.384265 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.384277 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.449033 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:51:59.82190563 +0000 UTC Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.487001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.487047 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.487058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.487074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.487085 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.589723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.589769 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.589783 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.589802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.589818 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.691970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.692004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.692013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.692026 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.692037 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.715495 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.715682 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.715495 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:26 crc kubenswrapper[4955]: E0202 13:03:26.715999 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.794378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.795191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.795318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.795403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.795486 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.898215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.898256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.898266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.898281 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:26 crc kubenswrapper[4955]: I0202 13:03:26.898293 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:26Z","lastTransitionTime":"2026-02-02T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.001399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.001447 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.001462 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.001480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.001492 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.103823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.103864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.103875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.103891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.103900 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.206277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.206349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.206404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.206422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.206434 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.308652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.308899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.308959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.309020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.309081 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.411781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.411824 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.411835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.411848 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.411857 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.449523 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:16:17.011383341 +0000 UTC Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.514107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.514139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.514147 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.514160 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.514172 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.615846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.615879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.615893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.615908 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.615919 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.715668 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.715682 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:27 crc kubenswrapper[4955]: E0202 13:03:27.715833 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:27 crc kubenswrapper[4955]: E0202 13:03:27.715919 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.718100 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.718135 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.718145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.718161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.718171 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.821000 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.821043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.821052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.821070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.821080 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.923924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.923962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.923972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.923986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:27 crc kubenswrapper[4955]: I0202 13:03:27.923996 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:27Z","lastTransitionTime":"2026-02-02T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.026545 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.026604 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.026615 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.026634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.026646 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.128914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.128964 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.128975 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.128991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.129003 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.231239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.231278 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.231288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.231319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.231329 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.333391 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.333434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.333443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.333458 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.333467 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.435269 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.435299 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.435307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.435319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.435328 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.449827 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:56:13.52167237 +0000 UTC Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.538131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.538161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.538170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.538184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.538193 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.640593 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.640654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.640666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.640681 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.640690 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.715489 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.715507 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:28 crc kubenswrapper[4955]: E0202 13:03:28.715697 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:28 crc kubenswrapper[4955]: E0202 13:03:28.715758 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.743246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.743538 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.743665 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.743765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.743845 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.846340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.846823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.846905 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.846981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.847042 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.949339 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.949377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.949386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.949401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:28 crc kubenswrapper[4955]: I0202 13:03:28.949410 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:28Z","lastTransitionTime":"2026-02-02T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.051765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.051805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.051815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.051829 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.051838 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.154082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.154120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.154131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.154145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.154155 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.257317 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.257354 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.257365 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.257379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.257388 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.360255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.360321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.360345 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.360377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.360402 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.450391 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:01:34.166785372 +0000 UTC Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.463215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.463273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.463288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.463309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.463324 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.565634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.565690 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.565704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.565723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.565739 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.667365 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.667412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.667425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.667442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.667455 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.715721 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.715759 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:29 crc kubenswrapper[4955]: E0202 13:03:29.715831 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:29 crc kubenswrapper[4955]: E0202 13:03:29.715995 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.750503 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.761927 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.769698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.769738 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.769751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.769765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.769774 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.776268 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.790232 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.802659 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.814145 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.829019 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.840136 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.853360 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.863338 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.871744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.871886 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.871899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.871914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.871924 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.872573 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.884156 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.894671 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.907869 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 
2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.920808 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.933280 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.974161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.974233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.974271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.974301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:29 crc kubenswrapper[4955]: I0202 13:03:29.974333 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:29Z","lastTransitionTime":"2026-02-02T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.077162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.077188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.077197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.077211 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.077220 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.179430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.179471 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.179487 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.179505 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.179521 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.282474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.282515 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.282530 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.282551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.282594 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.385124 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.385164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.385174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.385188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.385197 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.450908 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:11:05.50090291 +0000 UTC Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.488041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.488100 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.488121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.488147 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.488164 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.589521 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.589584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.589599 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.589621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.589636 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.691485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.691552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.691597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.691624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.691647 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.715749 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.715886 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:30 crc kubenswrapper[4955]: E0202 13:03:30.716002 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:30 crc kubenswrapper[4955]: E0202 13:03:30.716245 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.794291 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.794349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.794360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.794393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.794404 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.897350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.897380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.897390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.897404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:30 crc kubenswrapper[4955]: I0202 13:03:30.897413 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:30Z","lastTransitionTime":"2026-02-02T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.000460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.000531 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.000611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.000651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.000674 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.103166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.103218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.103231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.103247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.103258 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.206988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.207072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.207094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.207127 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.207145 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.310517 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.310595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.310620 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.310646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.310661 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.415521 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.415621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.415639 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.415666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.415683 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.451830 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:12:56.438147267 +0000 UTC Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.519266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.519360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.519390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.519428 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.519454 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.622417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.622475 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.622489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.622511 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.622526 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.715981 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:31 crc kubenswrapper[4955]: E0202 13:03:31.716187 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.716316 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:31 crc kubenswrapper[4955]: E0202 13:03:31.716624 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.725052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.725096 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.725107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.725123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.725141 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.827936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.828405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.828418 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.828438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.828451 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.932296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.932379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.932394 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.932421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:31 crc kubenswrapper[4955]: I0202 13:03:31.932438 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:31Z","lastTransitionTime":"2026-02-02T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.053251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.053301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.053312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.053330 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.053340 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.155203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.155244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.155253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.155271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.155282 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.257624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.257657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.257667 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.257681 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.257691 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.360940 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.361002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.361022 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.361047 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.361066 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.452252 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:45:33.382124213 +0000 UTC Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.463940 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.464002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.464045 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.464154 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.464173 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.566439 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.566483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.566502 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.566527 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.566545 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.668525 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.668589 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.668604 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.668616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.668626 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.715325 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.715355 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:32 crc kubenswrapper[4955]: E0202 13:03:32.715597 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:32 crc kubenswrapper[4955]: E0202 13:03:32.715665 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.771395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.771429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.771438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.771453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.771463 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.874098 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.874191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.874225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.874257 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.874320 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.978124 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.978166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.978177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.978194 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:32 crc kubenswrapper[4955]: I0202 13:03:32.978204 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:32Z","lastTransitionTime":"2026-02-02T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.081224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.081282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.081303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.081332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.081355 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.183477 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.183522 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.183533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.183549 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.183574 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.286575 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.286616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.286626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.286640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.286650 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.388837 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.388881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.388892 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.388908 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.388921 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.452615 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:56:31.49013728 +0000 UTC Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.497334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.497379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.497390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.497405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.497415 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.600084 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.600130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.600139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.600153 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.600169 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.706582 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.706623 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.706633 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.706648 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.706660 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.715250 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.715282 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:33 crc kubenswrapper[4955]: E0202 13:03:33.715363 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:33 crc kubenswrapper[4955]: E0202 13:03:33.715445 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.809681 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.809758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.809779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.809814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.809837 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.912875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.912923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.912939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.912961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:33 crc kubenswrapper[4955]: I0202 13:03:33.912976 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:33Z","lastTransitionTime":"2026-02-02T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.015634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.015718 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.015732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.015751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.015763 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.118272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.118344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.118361 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.118384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.118399 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.220857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.220906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.220918 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.220936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.220984 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.323977 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.324016 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.324029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.324045 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.324055 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.426070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.426104 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.426116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.426131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.426141 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.453680 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:11:28.291151272 +0000 UTC Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.527987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.528011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.528020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.528033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.528043 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.630716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.630974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.631038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.631109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.631172 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.715347 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.715399 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:34 crc kubenswrapper[4955]: E0202 13:03:34.715475 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:34 crc kubenswrapper[4955]: E0202 13:03:34.715593 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.732821 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.732999 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.733129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.733279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.733406 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.836818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.836865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.836876 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.836893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.836902 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.939701 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.939739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.939750 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.939764 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:34 crc kubenswrapper[4955]: I0202 13:03:34.939774 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:34Z","lastTransitionTime":"2026-02-02T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.042214 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.042260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.042270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.042284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.042294 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.144932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.144978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.144987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.145001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.145011 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.247928 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.247971 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.247982 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.247998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.248010 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.350593 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.350676 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.350702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.350731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.350750 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.453444 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.453492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.453513 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.453533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.453544 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.453817 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:18:41.832215837 +0000 UTC Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.556015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.556074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.556088 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.556107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.556120 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.658085 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.658133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.658145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.658162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.658175 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.715632 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.715632 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:35 crc kubenswrapper[4955]: E0202 13:03:35.715790 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:35 crc kubenswrapper[4955]: E0202 13:03:35.715886 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.760336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.760380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.760396 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.760416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.760430 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.862453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.862866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.863119 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.863354 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.863600 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.965540 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.965870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.966184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.966645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:35 crc kubenswrapper[4955]: I0202 13:03:35.966959 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:35Z","lastTransitionTime":"2026-02-02T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.069313 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.070006 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.070126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.070266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.070393 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.172231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.172500 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.172626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.172724 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.172807 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.275052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.275138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.275157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.275188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.275205 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.337790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.337838 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.337848 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.337867 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.337877 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.349348 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.352544 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.352593 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.352603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.352616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.352626 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.368810 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.372102 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.372122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.372129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.372142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.372172 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.382538 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.385883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.385920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.385932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.385946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.385956 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.400206 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.403590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.403638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.403654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.403673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.403689 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.420619 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.420819 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.422184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.422220 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.422233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.422251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.422264 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.454264 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:10:02.071550494 +0000 UTC Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.524550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.524620 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.524634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.524659 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.524674 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.627197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.627251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.627263 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.627282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.627294 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.715718 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.715949 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.716301 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.716413 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.718047 4955 scope.go:117] "RemoveContainer" containerID="c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9" Feb 02 13:03:36 crc kubenswrapper[4955]: E0202 13:03:36.718484 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.730693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.730733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.730770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.730793 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.730803 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.834109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.834154 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.834170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.834188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.834201 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.936453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.936494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.936506 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.936522 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:36 crc kubenswrapper[4955]: I0202 13:03:36.936535 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:36Z","lastTransitionTime":"2026-02-02T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.039509 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.039573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.039585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.039600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.039609 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:37Z","lastTransitionTime":"2026-02-02T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.142631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.142695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.142714 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.142739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.142765 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:37Z","lastTransitionTime":"2026-02-02T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.245617 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.245650 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.245660 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.245675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.245684 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:37Z","lastTransitionTime":"2026-02-02T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[This five-entry node-status cycle repeats at roughly 100 ms intervals through 13:03:39.707738, identical apart from advancing timestamps; the distinct entries interleaved with it are kept below.]
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.454549 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:32:45.712263319 +0000 UTC
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.715686 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:03:37 crc kubenswrapper[4955]: I0202 13:03:37.715772 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:37 crc kubenswrapper[4955]: E0202 13:03:37.715862 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:03:37 crc kubenswrapper[4955]: E0202 13:03:37.716015 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:03:38 crc kubenswrapper[4955]: I0202 13:03:38.455574 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:20:30.90408411 +0000 UTC
Feb 02 13:03:38 crc kubenswrapper[4955]: I0202 13:03:38.716070 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:03:38 crc kubenswrapper[4955]: I0202 13:03:38.716116 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:03:38 crc kubenswrapper[4955]: E0202 13:03:38.716214 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:03:38 crc kubenswrapper[4955]: E0202 13:03:38.716262 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.456383 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:04:05.87506674 +0000 UTC
Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.715779 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.715866 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:39 crc kubenswrapper[4955]: E0202 13:03:39.716010 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:03:39 crc kubenswrapper[4955]: E0202 13:03:39.716114 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
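[Annotation] The loop above is the kubelet's runtime-network readiness check failing: no CNI plugin has yet written a config into /etc/kubernetes/cni/net.d/, so every sync reports NetworkReady=false and pods that need a new sandbox cannot start. The Go sketch below is a minimal, illustrative stand-in for that kind of directory probe, not the kubelet's actual code; the path and message wording come from the log, while the function name and the accepted extensions (.conf, .conflist, .json — the ones libcni conventionally loads) are assumptions.

// cnicheck.go — illustrative sketch of a CNI-readiness probe, analogous to
// the check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
// Not the kubelet's real implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigReady reports whether dir contains at least one CNI config file.
func cniConfigReady(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions assumed from common libcni behavior.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	ready, err := cniConfigReady(dir)
	if err != nil || !ready {
		// Mirrors the condition the kubelet keeps logging above.
		fmt.Printf("network not ready: no CNI configuration file in %s (err=%v)\n", dir, err)
		return
	}
	fmt.Println("NetworkReady=true")
}

Once the network plugin (here, ovnkube) writes its conflist into that directory, the same probe flips to ready and the NodeNotReady cycle stops.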
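[Annotation] The certificate_manager.go:356 lines above report a different rotation deadline on each pass (2025-11-23, 2025-12-02, 2025-12-10) against the same expiration. That pattern is consistent with client-go's certificate manager picking a jittered point at roughly 70-90% of the certificate's validity window each time it evaluates rotation. A sketch of that computation, hedged: this is a simplified restatement of the jitter, not a copy of client-go, and the NotBefore below is an assumed one-year validity since the log only shows the expiration.

// rotation.go — sketch of a jittered rotation deadline like the ones logged
// by certificate_manager.go. Simplified restatement, not client-go's code.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the 70-90% span of the
// certificate's validity window (the jitter range assumed above).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; NotBefore is an assumption.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		// Each evaluation lands on a different deadline, as in the log.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}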
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.739164 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
20s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e
14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.756170 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.773044 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.786790 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.798060 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.809643 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.811129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.811199 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.811215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.811232 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.811244 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:39Z","lastTransitionTime":"2026-02-02T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.825121 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.840752 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.856154 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.866988 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.876916 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.890055 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.901787 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.913145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.913196 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.913212 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.913236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.913251 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:39Z","lastTransitionTime":"2026-02-02T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.915134 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.926309 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:39 crc kubenswrapper[4955]: I0202 13:03:39.936519 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.015312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.015343 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.015359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.015379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.015390 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.118848 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.118886 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.118898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.118915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.118925 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.221383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.221421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.221434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.221455 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.221469 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.368390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.368441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.368460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.368483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.368498 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.457503 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:04:42.635213244 +0000 UTC Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.470481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.470515 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.470526 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.470541 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.470568 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.574266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.574313 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.574325 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.574344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.574355 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.676350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.676383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.676395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.676410 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.676420 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.715599 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.715641 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:03:40 crc kubenswrapper[4955]: E0202 13:03:40.715742 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:03:40 crc kubenswrapper[4955]: E0202 13:03:40.715806 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.779209 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.779244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.779253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.779267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.779276 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.881108 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.881152 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.881164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.881184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.881200 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.983608 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.983647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.983659 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.983676 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:40 crc kubenswrapper[4955]: I0202 13:03:40.983688 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:40Z","lastTransitionTime":"2026-02-02T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.085941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.085980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.085991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.086008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.086021 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.188230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.188265 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.188274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.188289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.188298 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.291016 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.291068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.291087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.291109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.291127 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.394170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.394219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.394235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.394258 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.394274 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.457903 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:52:19.836601918 +0000 UTC Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.497239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.497283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.497295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.497312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.497327 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.599732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.599778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.599790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.599807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.599819 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.701418 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.701463 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.701477 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.701494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.701505 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.716056 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:03:41 crc kubenswrapper[4955]: E0202 13:03:41.716166 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.716332 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:41 crc kubenswrapper[4955]: E0202 13:03:41.716548 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.777886 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:03:41 crc kubenswrapper[4955]: E0202 13:03:41.778041 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:03:41 crc kubenswrapper[4955]: E0202 13:03:41.778097 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. No retries permitted until 2026-02-02 13:04:13.778080798 +0000 UTC m=+104.690417258 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.803664 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.803694 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.803702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.803716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.803725 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.906234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.906277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.906293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.906316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:41 crc kubenswrapper[4955]: I0202 13:03:41.906328 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:41Z","lastTransitionTime":"2026-02-02T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
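Note: the MountVolume.SetUp failure above is not a CNI problem. "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet has not yet synced that Secret into its local object cache, which is typical this early in startup, and the volume manager therefore schedules a retry with exponential backoff (here 32s; the m=+104.69... offset is seconds since the kubelet process started). Below is a minimal sketch of that doubling backoff; the base and cap values are assumptions for illustration, not the kubelet's exact parameters, though the log shows the delay had reached 32s by this attempt.

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative doubling backoff of the kind applied between failed
    // MountVolume attempts. base and maxDelay are assumed values.
    func main() {
        base := time.Second
        maxDelay := 2 * time.Minute
        delay := base
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
            delay *= 2 // double the wait after each failure
            if delay > maxDelay {
                delay = maxDelay // never wait longer than the cap
            }
        }
    }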
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.008873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.008933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.008951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.008974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.008991 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.111251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.111328 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.111352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.111384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.111406 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.216979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.217118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.217204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.217242 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.217276 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.320429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.320461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.320469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.320482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.320491 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.422963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.423004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.423016 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.423031 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.423042 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.458669 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:10:51.755503958 +0000 UTC Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.525261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.525369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.525386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.525411 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.525429 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.627164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.627208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.627230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.627249 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.627261 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.715994 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.716024 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:42 crc kubenswrapper[4955]: E0202 13:03:42.716123 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:42 crc kubenswrapper[4955]: E0202 13:03:42.716267 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.729637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.729666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.729678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.729695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.729710 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.831653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.831705 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.831723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.831752 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.831770 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.933901 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.933935 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.933945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.933959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:42 crc kubenswrapper[4955]: I0202 13:03:42.933968 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:42Z","lastTransitionTime":"2026-02-02T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.035765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.035824 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.035839 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.035856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.035867 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.137934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.137985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.138005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.138029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.138048 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.241935 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.241970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.241980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.241995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.242004 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.345732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.345805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.345828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.345853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.345869 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.448991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.449031 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.449043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.449058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.449069 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.459683 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:58:27.57123791 +0000 UTC
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.551198 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.551237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.551247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.551263 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.551272 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.653344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.653413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.653427 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.653450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.653465 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.716043 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.716108 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:03:43 crc kubenswrapper[4955]: E0202 13:03:43.716197 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:03:43 crc kubenswrapper[4955]: E0202 13:03:43.716327 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.756260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.756314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.756327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.756353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.756390 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.859454 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.859520 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.859535 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.859584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.859598 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.961985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.962029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.962041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.962057 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:43 crc kubenswrapper[4955]: I0202 13:03:43.962066 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:43Z","lastTransitionTime":"2026-02-02T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.064906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.064986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.065004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.065030 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.065047 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.167429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.167478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.167489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.167508 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.167520 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.270167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.270251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.270449 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.270471 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.270495 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.372853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.372896 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.372912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.372928 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.372938 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.460758 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:32:32.398457653 +0000 UTC Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.476800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.476856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.476875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.476897 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.476914 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.580376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.580406 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.580417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.580432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.580442 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.683091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.683121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.683129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.683141 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.683149 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.716070 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:44 crc kubenswrapper[4955]: E0202 13:03:44.716286 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.716685 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:44 crc kubenswrapper[4955]: E0202 13:03:44.716838 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.785664 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.785729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.785744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.785760 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.785796 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.887765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.887811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.887823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.887841 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.887853 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.990545 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.990605 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.990615 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.990629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:44 crc kubenswrapper[4955]: I0202 13:03:44.990638 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:44Z","lastTransitionTime":"2026-02-02T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.093722 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.093772 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.093791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.093816 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.093834 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.196938 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.197005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.197029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.197058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.197081 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.299252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.299324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.299337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.299354 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.299364 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.401834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.401876 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.401888 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.401903 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.401914 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.461909 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:35:47.438536779 +0000 UTC Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.503389 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.503425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.503436 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.503452 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.503462 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.605279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.605333 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.605349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.605368 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.605381 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.707450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.707550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.707653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.707674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.707687 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.715202 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:45 crc kubenswrapper[4955]: E0202 13:03:45.715313 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.715339 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:45 crc kubenswrapper[4955]: E0202 13:03:45.715445 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.809454 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.809508 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.809524 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.809546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.809583 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.911464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.911501 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.911510 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.911524 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:45 crc kubenswrapper[4955]: I0202 13:03:45.911535 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:45Z","lastTransitionTime":"2026-02-02T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.014277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.014312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.014325 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.014340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.014350 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.118213 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.118472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.118533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.118627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.118687 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.221250 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.221596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.221698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.221795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.221889 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.324417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.324455 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.324465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.324483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.324494 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.427593 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.427623 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.427631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.427645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.427653 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.462368 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:24:00.630711391 +0000 UTC Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.530309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.530359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.530374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.530392 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.530405 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.632219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.632277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.632296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.632317 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.632335 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.659735 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.659784 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.659801 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.659835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.659852 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.678125 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.682425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.682467 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.682483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.682502 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.682517 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.703735 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.708367 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.708408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.708419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.708437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.708450 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.715692 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.715701 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.715936 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.715828 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.727632 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.731487 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.731647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.731737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.731817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.731919 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.749796 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.753841 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.753874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.753885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.753900 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.753912 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.766231 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:46 crc kubenswrapper[4955]: E0202 13:03:46.766353 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.767789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.767814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.767822 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.767834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.767842 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.870270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.870302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.870311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.870324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.870333 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.973136 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.973488 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.973658 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.973791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:46 crc kubenswrapper[4955]: I0202 13:03:46.973922 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:46Z","lastTransitionTime":"2026-02-02T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.076035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.076091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.076114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.076139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.076157 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.179214 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.179303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.179327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.179356 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.179377 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.282154 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.282179 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.282188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.282202 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.282212 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.384800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.384882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.384900 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.384923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.384942 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.463146 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:41:28.089177333 +0000 UTC Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.487413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.487451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.487462 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.487477 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.487488 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.590133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.590193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.590210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.590234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.590253 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.692419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.692470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.692483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.692501 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.692513 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.715973 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.715997 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:47 crc kubenswrapper[4955]: E0202 13:03:47.716115 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:47 crc kubenswrapper[4955]: E0202 13:03:47.716213 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.794867 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.794920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.794937 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.794956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.794973 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.897225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.897262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.897272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.897285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:47 crc kubenswrapper[4955]: I0202 13:03:47.897293 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:47Z","lastTransitionTime":"2026-02-02T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.000273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.000336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.000356 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.000380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.000397 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.102423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.102481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.102499 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.102538 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.102585 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.206236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.206276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.206286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.206300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.206310 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.309495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.309584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.309603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.309632 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.309650 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.412749 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.412806 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.412825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.412850 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.412870 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.463719 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:02:50.836280214 +0000 UTC Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.516277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.516341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.516360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.516384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.516402 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.619591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.619642 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.619654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.619675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.619686 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.653177 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/0.log" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.653258 4955 generic.go:334] "Generic (PLEG): container finished" podID="93e471b4-0f7f-4216-8f9c-911f21b64e1e" containerID="7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981" exitCode=1 Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.653311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerDied","Data":"7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.653901 4955 scope.go:117] "RemoveContainer" containerID="7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.670033 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.689323 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.703758 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.715767 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:48 crc kubenswrapper[4955]: E0202 13:03:48.715901 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.715767 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:48 crc kubenswrapper[4955]: E0202 13:03:48.716062 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.719853 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.722411 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.722437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.722446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.722461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.722470 4955 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.739008 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.754453 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.768446 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.780476 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.801015 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e
9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.812496 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.824929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.824962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.824971 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.824985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.824997 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.825541 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.838118 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.848950 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.860038 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.871991 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.888217 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:48Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.927386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.927419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.927428 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.927442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:48 crc kubenswrapper[4955]: I0202 13:03:48.927453 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:48Z","lastTransitionTime":"2026-02-02T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.029887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.029922 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.029936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.030045 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.030059 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.133115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.133161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.133175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.133193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.133207 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.236254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.236306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.236318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.236334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.236351 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.340013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.340093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.340107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.340125 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.340161 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.442657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.442711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.442723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.442744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.442760 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.464747 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:47:11.540258483 +0000 UTC Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.546023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.546075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.546087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.546107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.546121 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.648366 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.648414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.648426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.648443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.648455 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.657595 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/0.log" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.657657 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerStarted","Data":"a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.677779 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.689851 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.715751 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.715765 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e
9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: E0202 13:03:49.715960 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.716059 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:49 crc kubenswrapper[4955]: E0202 13:03:49.716163 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.728195 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.733638 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 
13:03:49.751906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.751957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.751974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.751999 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.752017 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.752111 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.766238 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.783946 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.807030 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.827288 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.844779 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.854947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.855021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.855035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.855054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.855066 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.860128 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938006
6b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.876740 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.891744 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.909224 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.922117 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.933862 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.945311 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 
13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.956864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.956900 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.956913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.956929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.956939 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:49Z","lastTransitionTime":"2026-02-02T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.959075 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 
13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.971661 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:49 crc kubenswrapper[4955]: I0202 13:03:49.987804 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.001190 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:49Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.013076 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.022624 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.034445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.059895 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.059972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.059998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.060032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.060056 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.072328 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.098807 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.111462 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.122005 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.141199 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db82613-2215-46e0-b765-78468c33dca2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38752ea10ee618d200ad022f1a1c2310db4ebe6e6df323d5c72e93df56a5d976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81aacf34084ecf71590ddfa05746a8a44dcf932aa599453ccfc97d87a3c208c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://852cf92983990f7b9d8ee62f9ee1757642ed7f01f10d4f1e58626dd053918352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ddd999d11629a86b2948c037a0560c0865171b
a7e6ca44ea8bfc6c57a1f40a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdea6aaf929fcddb47ebb0271cb7a35d0c9e5ac23590c3ba9a8d14e905f4c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.152354 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.163185 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.163226 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.163238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.163254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.163267 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.168641 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.187352 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.196931 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.264926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.264962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.264973 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.264987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.264996 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.368009 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.368060 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.368079 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.368103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.368120 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.465690 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:56:48.414527848 +0000 UTC Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.476473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.476553 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.476596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.476621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.476639 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.579426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.579465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.579476 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.579492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.579502 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.682210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.682275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.682289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.682314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.682327 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.715891 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.716238 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:50 crc kubenswrapper[4955]: E0202 13:03:50.716329 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:50 crc kubenswrapper[4955]: E0202 13:03:50.716507 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.730796 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.785406 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.785472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.785491 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.785515 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.785534 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.888810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.888868 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.888887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.888910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.888928 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.991918 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.992011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.992046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.992076 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:50 crc kubenswrapper[4955]: I0202 13:03:50.992098 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:50Z","lastTransitionTime":"2026-02-02T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.094179 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.094242 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.094260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.094285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.094304 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.196442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.196472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.196481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.196493 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.196502 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.299357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.299426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.299444 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.299472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.299491 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.402843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.402902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.402920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.402945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.402964 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.466039 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:08:16.853283072 +0000 UTC Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.505633 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.505720 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.505743 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.505774 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.505801 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.608105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.608148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.608161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.608177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.608189 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.710344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.710395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.710410 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.710429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.710445 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.716001 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:51 crc kubenswrapper[4955]: E0202 13:03:51.716211 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.716505 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:51 crc kubenswrapper[4955]: E0202 13:03:51.716672 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.718007 4955 scope.go:117] "RemoveContainer" containerID="c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.813127 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.813171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.813183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.813203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.813215 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.916193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.916240 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.916258 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.916282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:51 crc kubenswrapper[4955]: I0202 13:03:51.916296 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:51Z","lastTransitionTime":"2026-02-02T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.019155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.019202 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.019216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.019234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.019254 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.121817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.121860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.121873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.121890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.121902 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.224036 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.224083 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.224093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.224107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.224119 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.326245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.326295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.326308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.326327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.326339 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.428305 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.428352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.428364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.428384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.428396 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.466841 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:14:29.351813275 +0000 UTC Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.531292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.531349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.531363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.531380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.531393 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.633496 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.633523 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.633532 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.633544 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.633572 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.670531 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/3.log" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.671494 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/2.log" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.674851 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" exitCode=1 Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.674908 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.674990 4955 scope.go:117] "RemoveContainer" containerID="c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.677197 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:03:52 crc kubenswrapper[4955]: E0202 13:03:52.679053 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.697455 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b46269
0eeba4f5a6edb2589f3fed9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b8365e25383a82f2ac07533ec589043999588e9d60deba627fdfd09101a2a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"message\\\":\\\"nil)\\\\nI0202 13:03:23.910950 6621 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nI0202 13:03:23.910957 6621 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-hjcmj\\\\nF0202 13:03:23.910587 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:23Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:03:23.910961 6621 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0202 13:03:23.910941 6621 ovn.go:134] Ensuring zone local for Pod \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:52Z\\\",\\\"message\\\":\\\"0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.550886 7062 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.547961 7062 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 13:03:52.551108 7062 
model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.708420 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.715914 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.716034 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:52 crc kubenswrapper[4955]: E0202 13:03:52.716131 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:52 crc kubenswrapper[4955]: E0202 13:03:52.716255 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.720500 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.735955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.736010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.736029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.736053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.736074 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.736820 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.756941 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: 
I0202 13:03:52.774428 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.790766 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.802511 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.817148 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.830218 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07da3eeb-75df-4077-bacc-91c7492b8390\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b3a3306313c83e5b30cad24fd57a77b254ceaa21df4228d109f47cde1aa378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.838322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.838356 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.838366 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.838378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.838388 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.848771 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.861797 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.875474 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.886125 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.899514 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.921534 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db82613-2215-46e0-b765-78468c33dca2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38752ea10ee618d200ad022f1a1c2310db4ebe6e6df323d5c72e93df56a5d976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81aacf34084ecf71590ddfa05746a8a44dcf932aa599453ccfc97d87a3c208c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://852cf92983990f7b9d8ee62f9ee1757642ed7f01f10d4f1e58626dd053918352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ddd999d11629a86b2948c037a0560c0865171b
a7e6ca44ea8bfc6c57a1f40a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdea6aaf929fcddb47ebb0271cb7a35d0c9e5ac23590c3ba9a8d14e905f4c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.935580 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.940040 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.940081 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.940093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.940110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.940122 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:52Z","lastTransitionTime":"2026-02-02T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:52 crc kubenswrapper[4955]: I0202 13:03:52.947899 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.042833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.042875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.042887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.042905 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.042918 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.145139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.145190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.145209 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.145235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.145253 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.175344 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.248410 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.248474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.248483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.248499 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.248527 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.351374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.351413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.351424 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.351439 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.351448 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.454685 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.454730 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.454744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.454761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.454773 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.467705 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:28:45.302368818 +0000 UTC Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.557010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.557062 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.557072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.557088 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.557097 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.659477 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.659910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.659928 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.659965 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.659983 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.679262 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/3.log" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.682684 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.682902 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.697201 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.697309 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:04:57.69728771 +0000 UTC m=+148.609624180 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.697385 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.697440 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.697493 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.697529 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698098 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698152 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:04:57.698136901 +0000 UTC m=+148.610473351 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698325 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698337 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698347 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698381 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:04:57.698374287 +0000 UTC m=+148.610710737 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698714 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698743 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:04:57.698734807 +0000 UTC m=+148.611071257 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698802 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698861 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698878 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.698929 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:04:57.698914431 +0000 UTC m=+148.611250981 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.701451 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db82613-2215-46e0-b765-78468c33dca2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38752ea10ee618d200ad022f1a1c2310db4ebe6e6df323d5c72e93df56a5d976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81aacf34084ecf71590ddfa05746a8a44dcf932aa599453ccfc97d87a3c208c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://852cf92983990f7b9d8ee62f9ee1757642ed7f01f10d4f1e58626dd053918352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ddd999d11629a86b2948c037a0560c0865171b
a7e6ca44ea8bfc6c57a1f40a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdea6aaf929fcddb47ebb0271cb7a35d0c9e5ac23590c3ba9a8d14e905f4c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.713542 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.717309 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.717692 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.718073 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:53 crc kubenswrapper[4955]: E0202 13:03:53.718385 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.725439 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.745847 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:52Z\\\",\\\"message\\\":\\\"0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.550886 7062 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.547961 7062 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 13:03:52.551108 7062 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.757785 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.762829 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.762856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.762866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.762879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.762889 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.775094 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.787308 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.805436 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.817775 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.829466 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.845433 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.860943 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.864529 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.864574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.864584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.864596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.864605 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.874640 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07da3eeb-75df-4077-bacc-91c7492b8390\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b3a3306313c83e5b30cad24fd57a77b254ceaa21df4228d109f47cde1aa378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.890250 4955 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.905384 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.920287 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.932985 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.947130 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.967191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.967267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.967279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.967293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:53 crc kubenswrapper[4955]: I0202 13:03:53.967303 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:53Z","lastTransitionTime":"2026-02-02T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.070021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.070078 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.070092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.070108 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.070121 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.172322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.172370 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.172382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.172474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.172487 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.275090 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.275130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.275140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.275156 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.275166 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.380693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.380769 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.380793 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.380826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.380847 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.468763 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:15:43.460888491 +0000 UTC Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.484074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.484139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.484162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.484187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.484206 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.587710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.587787 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.587806 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.587834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.587852 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.690767 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.690828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.690845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.690893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.690912 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.715650 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.715713 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:54 crc kubenswrapper[4955]: E0202 13:03:54.715828 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:54 crc kubenswrapper[4955]: E0202 13:03:54.716272 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.793683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.793759 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.793785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.793815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.793838 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.896939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.897024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.897044 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.897068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:54 crc kubenswrapper[4955]: I0202 13:03:54.897092 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:54Z","lastTransitionTime":"2026-02-02T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.000753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.000841 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.000861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.000887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.000907 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.104321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.104404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.104428 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.104470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.104489 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.208413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.208487 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.208509 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.208536 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.208599 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.314340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.314391 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.314462 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.314495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.314508 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.420032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.420094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.420222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.420262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.420285 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.469879 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:47:49.272177581 +0000 UTC Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.522795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.522861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.522879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.522902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.522921 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.625470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.625572 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.625614 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.625631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.625641 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.716097 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:55 crc kubenswrapper[4955]: E0202 13:03:55.716474 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.716499 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:55 crc kubenswrapper[4955]: E0202 13:03:55.716833 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.727548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.727945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.728264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.728401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.728517 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.729941 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.831549 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.831592 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.831600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.831612 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.831621 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.934052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.934103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.934113 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.934144 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:55 crc kubenswrapper[4955]: I0202 13:03:55.934152 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:55Z","lastTransitionTime":"2026-02-02T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.036327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.036387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.036409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.036437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.036458 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.139858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.140161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.140179 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.140200 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.140214 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.243056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.243100 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.243115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.243135 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.243149 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.349021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.349095 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.349113 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.349137 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.349157 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.452483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.452993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.453215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.453360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.453515 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.470479 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:55:05.919682077 +0000 UTC Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.556818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.556883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.556902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.556926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.556943 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.660665 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.660726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.660743 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.660765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.660785 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.716292 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.716335 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:56 crc kubenswrapper[4955]: E0202 13:03:56.716615 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:56 crc kubenswrapper[4955]: E0202 13:03:56.716802 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.764008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.764071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.764089 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.764115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.764134 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.866885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.866957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.866977 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.867004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.867022 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.937482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.937626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.937671 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.937711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.937734 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:56 crc kubenswrapper[4955]: E0202 13:03:56.961282 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:56Z is after 2025-08-24T17:21:41Z"
Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.966180 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.966218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.966228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.966242 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.966254 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:03:56 crc kubenswrapper[4955]: E0202 13:03:56.986435 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:56Z is after 2025-08-24T17:21:41Z"
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.991409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.991469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.991524 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.991548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:56 crc kubenswrapper[4955]: I0202 13:03:56.991616 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:56Z","lastTransitionTime":"2026-02-02T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: E0202 13:03:57.010744 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.015269 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.015347 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.015381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.015412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.015435 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: E0202 13:03:57.038441 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.043853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.043889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.043903 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.043922 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.043940 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: E0202 13:03:57.060829 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:57Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:57 crc kubenswrapper[4955]: E0202 13:03:57.060996 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.063006 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.063037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.063048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.063066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.063077 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.165595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.165649 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.165666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.165684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.165696 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.268907 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.268983 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.269006 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.269032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.269049 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.371907 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.372012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.372038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.372068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.372091 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.471663 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:24:17.00068461 +0000 UTC Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.474641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.474678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.474691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.474707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.474721 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.577158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.577192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.577203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.577216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.577225 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.679755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.679789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.679798 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.679811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.679821 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.716139 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:57 crc kubenswrapper[4955]: E0202 13:03:57.716290 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.716628 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:57 crc kubenswrapper[4955]: E0202 13:03:57.716725 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.782167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.782215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.782224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.782240 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.782254 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.884228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.884283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.884294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.884310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.884322 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.986649 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.986688 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.986699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.986714 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:57 crc kubenswrapper[4955]: I0202 13:03:57.986726 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:57Z","lastTransitionTime":"2026-02-02T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.088500 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.088535 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.088547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.088574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.088583 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.190966 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.190995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.191007 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.191054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.191064 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.293488 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.293573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.293585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.293603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.293614 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.396246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.396284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.396294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.396310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.396322 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.472789 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:01:04.303755818 +0000 UTC Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.499245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.499296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.499307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.499322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.499333 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.602123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.602176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.602191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.602212 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.602227 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.704395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.704468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.704492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.704520 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.704542 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.716032 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.716112 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:58 crc kubenswrapper[4955]: E0202 13:03:58.716156 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:58 crc kubenswrapper[4955]: E0202 13:03:58.716259 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.808390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.808460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.808479 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.808505 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.808524 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.911492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.911613 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.911652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.911684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:58 crc kubenswrapper[4955]: I0202 13:03:58.911705 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:58Z","lastTransitionTime":"2026-02-02T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.015040 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.015096 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.015114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.015138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.015155 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.117468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.117499 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.117507 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.117520 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.117529 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.220977 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.221034 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.221087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.221112 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.221132 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.324708 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.324789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.324809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.324834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.324855 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.428120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.428170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.428182 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.428200 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.428211 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.473890 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:58:19.479955929 +0000 UTC Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.531588 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.531627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.531662 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.531681 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.531694 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.634719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.634815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.634880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.634912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.634981 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.715947 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:59 crc kubenswrapper[4955]: E0202 13:03:59.716148 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.717695 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:03:59 crc kubenswrapper[4955]: E0202 13:03:59.717858 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.736654 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.739590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.739657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.739675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.739703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: 
I0202 13:03:59.739730 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.759004 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.773976 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.793961 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.813671 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07da3eeb-75df-4077-bacc-91c7492b8390\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b3a3306313c83e5b30cad24fd57a77b254ceaa21df4228d109f47cde1aa378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.832898 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.844190 4955 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.844233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.844244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.844261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.844273 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.848869 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.874780 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db82613-2215-46e0-b765-78468c33dca2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38752ea10ee618d200ad022f1a1c2310db4ebe6e6df323d5c72e93df56a5d976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81aacf34084ecf71590ddfa05746a8a44dcf932aa599453ccfc97d87a3c208c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://852cf92983990f7b9d8ee62f9ee1757642ed7f01f10d4f1e58626dd053918352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ddd999d11629a86b2948c037a0560c0865171ba7e6ca44ea8bfc6c57a1f40a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdea6aaf929fcddb47ebb0271cb7a35d0c9e5ac23590c3ba9a8d14e905f4c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.888480 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.899797 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.926719 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b46269
0eeba4f5a6edb2589f3fed9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:52Z\\\",\\\"message\\\":\\\"0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.550886 7062 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.547961 7062 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 13:03:52.551108 7062 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.941267 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.946222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.946259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.946271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.946291 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.946307 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:03:59Z","lastTransitionTime":"2026-02-02T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.958100 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.971847 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:03:59 crc kubenswrapper[4955]: I0202 13:03:59.984830 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.000322 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:03:59Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.016474 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"280cbe3b-6377-40e9-99e9-5e64737d449c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f5e5956114c1979f525d57c7df146bdd9ff455a6532ee2d5528f33a2f47b53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://995a8c5f4d865cd956d49e6e7702944feb1fad045e53ed4e8e23e31c495443c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960f402caf1caf531e49391c1aa0e9e58a06ac82db00ad44d31c51a4b40319e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be602f3228acff6d3c11b288e4e8e5572680422268b7a0a8a952416fd03370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.028967 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.043616 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.049114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.049160 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.049173 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.049193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.049206 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.152494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.152547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.152601 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.152632 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.152650 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.255018 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.255077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.255133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.255162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.255181 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.358251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.358336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.358358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.358386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.358405 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.461869 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.461960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.461986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.462015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.462038 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.475039 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:39:17.989029144 +0000 UTC Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.564381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.564430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.564451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.564475 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.564492 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.667273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.667321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.667335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.667353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.667366 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.716015 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.716123 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:00 crc kubenswrapper[4955]: E0202 13:04:00.716212 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:00 crc kubenswrapper[4955]: E0202 13:04:00.716312 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.770647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.770699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.770718 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.770743 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.770764 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.872683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.872755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.872781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.872815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.872840 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.976723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.976799 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.976823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.976852 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:00 crc kubenswrapper[4955]: I0202 13:04:00.976880 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:00Z","lastTransitionTime":"2026-02-02T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.079087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.079127 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.079140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.079155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.079166 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.181585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.181614 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.181626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.181640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.181649 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.283909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.283976 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.283995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.284019 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.284039 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.387053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.387139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.387161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.387192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.387212 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.475505 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:49:57.134131509 +0000 UTC Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.490590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.490646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.490658 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.490680 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.490694 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.594227 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.594297 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.594315 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.594346 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.594366 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.698809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.698865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.698899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.698926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.698945 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.715954 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.716042 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:01 crc kubenswrapper[4955]: E0202 13:04:01.716165 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:01 crc kubenswrapper[4955]: E0202 13:04:01.716240 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.801905 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.801954 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.801979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.801993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.802002 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.903756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.903818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.903837 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.903860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:01 crc kubenswrapper[4955]: I0202 13:04:01.903878 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:01Z","lastTransitionTime":"2026-02-02T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.006426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.006473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.006484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.006500 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.006511 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.109303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.109349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.109364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.109381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.109393 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.212735 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.212788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.212799 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.212817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.212826 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.315821 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.315875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.315890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.315912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.315926 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.418337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.418382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.418394 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.418411 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.418423 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.476347 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:32:27.370781235 +0000 UTC Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.521058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.521100 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.521119 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.521140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.521156 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.623334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.623399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.623422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.623453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.623475 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.715616 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.715716 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:02 crc kubenswrapper[4955]: E0202 13:04:02.715807 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:02 crc kubenswrapper[4955]: E0202 13:04:02.716051 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.725034 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.725104 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.725115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.725126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.725136 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.827878 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.827919 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.827931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.827948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.827961 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.930305 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.930359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.930376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.930397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:02 crc kubenswrapper[4955]: I0202 13:04:02.930412 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:02Z","lastTransitionTime":"2026-02-02T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.032811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.032862 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.032874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.032890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.032904 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.135621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.135662 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.135675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.135692 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.135705 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.238378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.238413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.238422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.238435 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.238445 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.341106 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.341154 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.341171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.341193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.341210 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.444248 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.444303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.444331 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.444360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.444377 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.476929 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:12:53.844146235 +0000 UTC Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.547256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.547303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.547319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.547338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.547350 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.649815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.649860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.649873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.649889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.649900 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.715254 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:03 crc kubenswrapper[4955]: E0202 13:04:03.715432 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.715498 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:03 crc kubenswrapper[4955]: E0202 13:04:03.715683 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.752424 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.752466 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.752479 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.752496 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.752509 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.855662 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.855760 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.855780 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.855802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.855820 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.958638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.958701 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.958719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.958745 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:03 crc kubenswrapper[4955]: I0202 13:04:03.958762 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:03Z","lastTransitionTime":"2026-02-02T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.060888 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.060930 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.060942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.060957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.060967 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.164292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.164340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.164361 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.164389 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.164408 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.267014 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.267042 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.267051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.267063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.267072 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.369810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.369846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.369856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.369870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.369879 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.472509 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.472551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.472584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.472603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.472614 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.477700 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:44:44.055771348 +0000 UTC Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.575672 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.575980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.575991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.576009 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.576020 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.678331 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.678374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.678396 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.678415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.678429 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.715331 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.715392 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:04 crc kubenswrapper[4955]: E0202 13:04:04.715478 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:04 crc kubenswrapper[4955]: E0202 13:04:04.715597 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.780777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.780815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.780831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.780845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.780855 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.882835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.882872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.882881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.882893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.882902 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.985858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.985925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.985944 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.985970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:04 crc kubenswrapper[4955]: I0202 13:04:04.985987 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:04Z","lastTransitionTime":"2026-02-02T13:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.088229 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.088300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.088318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.088333 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.088343 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.191015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.191062 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.191072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.191086 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.191098 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.293377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.293432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.293445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.293463 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.293473 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.396070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.396114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.396125 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.396138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.396150 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.478715 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:54:16.766331974 +0000 UTC Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.498702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.498752 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.498771 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.498790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.498802 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.601709 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.601800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.601834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.601862 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.601883 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.703614 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.703661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.703671 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.703683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.703694 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.716273 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.716293 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:05 crc kubenswrapper[4955]: E0202 13:04:05.716413 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:05 crc kubenswrapper[4955]: E0202 13:04:05.716793 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.717072 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:04:05 crc kubenswrapper[4955]: E0202 13:04:05.717269 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.805727 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.805803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.805826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.805858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.805879 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.908861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.908898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.908909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.908924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:05 crc kubenswrapper[4955]: I0202 13:04:05.908937 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:05Z","lastTransitionTime":"2026-02-02T13:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.011046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.011120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.011140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.011175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.011199 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.112837 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.112880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.112891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.112906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.112916 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.215697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.215744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.215755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.215773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.215791 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.318128 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.318171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.318183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.318199 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.318210 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.421953 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.422025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.422043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.422066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.422089 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.479287 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:17:38.111562536 +0000 UTC Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.524874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.524942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.524961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.524985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.525003 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.627930 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.627976 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.627987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.628002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.628011 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.715229 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:06 crc kubenswrapper[4955]: E0202 13:04:06.715357 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.715234 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:06 crc kubenswrapper[4955]: E0202 13:04:06.715587 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.731039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.731086 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.731098 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.731115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.731129 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.833332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.833399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.833416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.833432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.833444 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.936043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.936094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.936109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.936127 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:06 crc kubenswrapper[4955]: I0202 13:04:06.936140 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:06Z","lastTransitionTime":"2026-02-02T13:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.039138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.039196 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.039215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.039238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.039257 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.080423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.080468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.080480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.080497 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.080509 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.100370 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.105417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.105514 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.105539 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.105611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.105634 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.120548 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.125754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.125823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
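Every status-patch attempt above fails the same way: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal standalone sketch of how one might confirm this from the node follows; it is not part of the cluster tooling, assumes Go is available and the endpoint is reachable, and uses only the standard crypto/tls library.

// certcheck.go — dial the webhook endpoint from the log, skip chain
// verification so the handshake completes even with a bad certificate,
// and print the presented certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspection only; nothing is trusted
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the log: current time 2026-02-02T13:04:07Z is after 2025-08-24T17:21:41Z
		fmt.Println("certificate has EXPIRED")
	}
}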
event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.125845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.125870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.125889 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.141151 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.146177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.146227 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.146240 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.146260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.146274 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.160116 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.165879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.165922 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
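Independently of the webhook failure, the recurring Ready=False condition reports its own cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A rough standalone check of that condition is sketched below; it is not the actual CRI-O/kubelet code, and the extension list (.conf, .conflist, .json) is the set the CNI library conventionally loads, assumed here.

// cnicheck.go — scan the CNI configuration directory named in the
// NetworkReady message and report whether any config file is present.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed conventional CNI extensions
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// Mirrors the log: "no CNI configuration file in /etc/kubernetes/cni/net.d/"
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("found CNI configuration:", confs)
}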
event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.165950 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.165971 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.165985 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.179474 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"98377b24-b2ba-4f17-bb8d-a7b7a933930f\\\",\\\"systemUUID\\\":\\\"a1e684ba-38f3-4fac-88c1-b29f9c39bcf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:07Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.179643 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.181983 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.182057 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.182082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.182112 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.182131 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.285319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.285716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.285750 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.285775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.285793 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.389934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.390005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.390026 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.390058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.390078 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.480198 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:41:08.327644176 +0000 UTC Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.492687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.492757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.492778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.492805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.492823 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.595880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.595931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.595949 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.595975 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.595993 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.699068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.699126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.699141 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.699164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.699177 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.715821 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.715984 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.716110 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:07 crc kubenswrapper[4955]: E0202 13:04:07.716213 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.802297 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.802332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.802341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.802353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.802362 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.904590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.904634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.904671 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.904688 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:07 crc kubenswrapper[4955]: I0202 13:04:07.904697 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:07Z","lastTransitionTime":"2026-02-02T13:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.006960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.007052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.007081 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.007110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.007133 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.110095 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.110154 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.110171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.110193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.110209 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.214778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.214863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.214903 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.214926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.214938 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.318821 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.318889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.318914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.318946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.318968 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.422308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.422374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.422392 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.422419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.422436 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.480812 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:06:49.751756806 +0000 UTC Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.525541 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.525643 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.525667 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.525696 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.525715 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.629470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.629541 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.629590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.629621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.629642 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.715632 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.715789 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:08 crc kubenswrapper[4955]: E0202 13:04:08.715843 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:08 crc kubenswrapper[4955]: E0202 13:04:08.716051 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.732609 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.732660 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.732670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.732687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.732697 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.836602 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.836686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.836706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.836734 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.836755 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.940133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.940216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.940238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.940273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:08 crc kubenswrapper[4955]: I0202 13:04:08.940295 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:08Z","lastTransitionTime":"2026-02-02T13:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.043135 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.043193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.043212 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.043235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.043252 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.146622 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.146693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.146704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.146723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.146749 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.250188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.250264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.250288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.250320 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.250344 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.353453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.353518 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.353543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.353619 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.353644 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.455815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.455884 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.455902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.455927 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.455946 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.481840 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:51:06.103783149 +0000 UTC Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.559123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.559197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.559223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.559252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.559275 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.661773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.661846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.661868 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.661891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.661908 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.716119 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.716142 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:09 crc kubenswrapper[4955]: E0202 13:04:09.716346 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:09 crc kubenswrapper[4955]: E0202 13:04:09.716512 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.734021 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7bpsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93e471b4-0f7f-4216-8f9c-911f21b64e1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:48Z\\\",\\\"message\\\":\\\"2026-02-02T13:03:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c\\\\n2026-02-02T13:03:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dc4b8bef-57f5-4238-b76f-c4e6aa17e96c to /host/opt/cni/bin/\\\\n2026-02-02T13:03:03Z [verbose] multus-daemon started\\\\n2026-02-02T13:03:03Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:03:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8vlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7bpsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.746421 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f37534-569f-4b2e-989a-f95866cb79e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e83c36175265cbdf6bc59c525a7b9764d1f1b73b6e4ff264fd576d845c212ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ppf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6l62h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.761843 4955 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9edb0d45-28ef-4cd7-8a24-c720c2d23382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43b7965e0c9d9b666eb147fcee70339b91cee565f35e63b43715825bc2597559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5caa6fff6cefebcac09ae3e54f2bf8c2358194653265a02b7964103f25fda222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbf4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2cq7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.765162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.765225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.765252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.765282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.765303 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.783153 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a18b769-e25b-453d-9617-219f7e480b33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 13:02:49.373264 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 13:02:49.373451 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:02:49.374444 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196111394/tls.crt::/tmp/serving-cert-4196111394/tls.key\\\\\\\"\\\\nI0202 13:02:49.720690 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:02:49.724257 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:02:49.724284 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:02:49.724576 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:02:49.724595 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:02:49.732513 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:02:49.732545 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:02:49.732571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:02:49.732544 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:02:49.732574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:02:49.732599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:02:49.732602 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:02:49.735214 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.800206 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"280cbe3b-6377-40e9-99e9-5e64737d449c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f5e5956114c1979f525d57c7df146bdd9ff455a6532ee2d5528f33a2f47b53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://995a8c5f4d865cd956d49e6e7702944feb1fad045e53ed4e8e23e31c495443c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960f402caf1caf531e49391c1aa0e9e58a06ac82db00ad44d31c51a4b40319e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be602f3228acff6d3c11b288e4e8e5572680422268b7a0a8a952416fd03370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.819499 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3854d5d63340eaa3333fd92f6b23697827a5bb47469ec3fa6fedb5c7bc967d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae3e0e524141502beb7e878ca783f0c2e03aadb44809e203cf06ced80312f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.838056 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rplmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"959a2015-a670-4ebe-b0a1-d18c1b44cb4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e714526a16e226d9e5f8b1a3ddbd3c5032d0626f8f6f30219a07a29b6020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5664d491a7907c28b1186276ce170e9b1745c48fcf8dff3782c64554246ed641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec96d8539bf81da764cf01837c3645bccdd45322d56f330c3c85d7b546e43abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50655ff99a36aafedc440b5eed484e20319236e662f026a064524eddbeda1132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe4601fe3999afc36a5f4d0047370d33fa5bc8509baafe369f4bd22edefb3d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0248176cec12f9912b0a8d9173afeb40b3e32639ce0febe8f22d2526122cc32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd9dc311f7e78a5853b9079c690b7fd19285a1e1b0f6c07a7e2e2e7f0305e53a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:03:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:06Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dlw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rplmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.857238 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1757ade9a9460d277e8faa8e49a430fe680a36ab474aa4bc76e406390e70d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.868687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.868833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.868871 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.868901 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.868952 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.871756 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dxh2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a985401e-d37f-4c38-a506-93c3f3ccd986\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5b2bb827327d090b9ea8ae2a8c985e8b0fc170aa28961dfa54b445f734ce55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42lc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dxh2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc 
kubenswrapper[4955]: I0202 13:04:09.886378 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-crzll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ff5509a-2943-4526-8bcc-900dca52a6b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e414d79ff929bb7547e54231a7de098684cffc01f1b24588ad5bff4f399146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79hdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-crzll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.898488 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adc136c-fa74-4369-9f38-1ba52de4ebab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c60e92cee9bd3bccac68de0215ea5cca98cc73f4824943bc418033b72bc4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a731059d0e0d6b7626697d820885c68f344e9194d29cc6fe407b3946dfb2533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75d7d3d3bf40facfdba3d7ca2a10e9c7df2be89e1b605ab6c70ae252978623f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f32a7c687e7971ba2e5063b637cca2d2f751aaa60d8dff51ab8d26cd09f6aa37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.914965 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07da3eeb-75df-4077-bacc-91c7492b8390\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b3a3306313c83e5b30cad24fd57a77b254ceaa21df4228d109f47cde1aa378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1895d7edbdecaa563513027fcb9df8e48b89c5e36fdda597fa9f9270da5d0172\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.933368 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d24828c53062a10c456e48229c7446f8bb5031f822bf79982b5b7b807ff1c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.950106 4955 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.974937 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db82613-2215-46e0-b765-78468c33dca2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38752ea10ee618d200ad022f1a1c2310db4ebe6e6df323d5c72e93df56a5d976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81aacf34084ecf71590ddfa05746a8a44dcf932aa599453ccfc97d87a3c208c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://852cf92983990f7b9d8ee62f9ee1757642ed7f01f10d4f1e58626dd053918352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ddd999d11629a86b2948c037a0560c0865171b
a7e6ca44ea8bfc6c57a1f40a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdea6aaf929fcddb47ebb0271cb7a35d0c9e5ac23590c3ba9a8d14e905f4c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b633833b3730bfe97d367fe8677e69d729e38888fe2c7b67dbd2b1606467e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://586764d58ad530c9248fe63cc65a27d200479adabd0e73d353985054a9798c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c726e7b4639a62eb1769988a3046c6b113c2c3d9eee33ea657539aaf06ebfd7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.978254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.978335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.978379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.978417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.978443 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:09Z","lastTransitionTime":"2026-02-02T13:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:09 crc kubenswrapper[4955]: I0202 13:04:09.993122 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:09Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.014092 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.038361 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d35d22-ea6a-4ada-a086-b199c153c940\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b46269
0eeba4f5a6edb2589f3fed9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:03:52Z\\\",\\\"message\\\":\\\"0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.550886 7062 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:03:52.547961 7062 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 13:03:52.551108 7062 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:03:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lr87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:02:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z2cps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.050728 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009c80d7-da9c-46cc-b0d2-570de04e6510\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f5mnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:03:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjcmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:04:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.080727 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.080784 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.080803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.080827 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.080846 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.183787 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.183836 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.183855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.183879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.183896 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.286889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.286960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.286988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.287022 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.287046 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.389793 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.389875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.389895 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.389924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.389945 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.482871 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:44:35.101443284 +0000 UTC Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.492940 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.492990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.493006 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.493025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.493041 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.597994 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.598074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.598121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.598157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.598181 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.702276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.702375 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.702393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.702415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.702429 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.715862 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.715939 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:10 crc kubenswrapper[4955]: E0202 13:04:10.716073 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:10 crc kubenswrapper[4955]: E0202 13:04:10.716185 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.805171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.805285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.805298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.805315 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.805325 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.907986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.908090 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.908106 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.908124 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:10 crc kubenswrapper[4955]: I0202 13:04:10.908138 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:10Z","lastTransitionTime":"2026-02-02T13:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.011218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.011291 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.011318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.011350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.011372 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.115408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.115482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.115507 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.115539 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.115620 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.219137 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.219192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.219208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.219230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.219242 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.322676 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.322783 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.322803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.322832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.322849 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.426442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.426550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.426600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.426629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.426647 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.483094 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:26:37.576633571 +0000 UTC Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.530484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.530612 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.530637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.530663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.530683 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.633723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.633808 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.633828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.633859 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.633878 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.716373 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.716508 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:11 crc kubenswrapper[4955]: E0202 13:04:11.716521 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:11 crc kubenswrapper[4955]: E0202 13:04:11.716777 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.736580 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.736641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.736660 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.736683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.736701 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.840122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.840180 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.840201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.840231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.840249 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.943214 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.943284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.943311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.943342 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:11 crc kubenswrapper[4955]: I0202 13:04:11.943364 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:11Z","lastTransitionTime":"2026-02-02T13:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.046924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.047013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.047039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.047071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.047091 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.150294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.150400 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.150421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.150450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.150477 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.253542 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.253694 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.253726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.253769 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.253799 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.357423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.357497 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.357517 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.357544 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.357614 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.464812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.464860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.464874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.465067 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.465077 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.484301 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:02:55.26891726 +0000 UTC Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.567300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.567350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.567362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.567378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.567392 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.669751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.669780 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.669791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.669805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.669815 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.716080 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.716141 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:12 crc kubenswrapper[4955]: E0202 13:04:12.716251 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:12 crc kubenswrapper[4955]: E0202 13:04:12.716339 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.771520 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.771624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.771637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.771653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.771665 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.873925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.873980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.873996 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.874018 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.874034 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.976480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.976529 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.976544 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.976592 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:12 crc kubenswrapper[4955]: I0202 13:04:12.976610 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:12Z","lastTransitionTime":"2026-02-02T13:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.079881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.079967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.079981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.080000 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.080023 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.182884 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.182916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.182925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.182938 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.182947 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.285978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.286035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.286056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.286081 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.286099 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.388192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.388232 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.388245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.388264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.388279 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.485077 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:20:45.528628742 +0000 UTC Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.491267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.491342 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.491366 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.491508 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.491610 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.595231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.595296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.595314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.595338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.595356 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.697713 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.697766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.697782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.697802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.697819 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.715633 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.715671 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:13 crc kubenswrapper[4955]: E0202 13:04:13.715746 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:13 crc kubenswrapper[4955]: E0202 13:04:13.715988 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.800073 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.800119 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.800135 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.800155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.800171 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.819708 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:13 crc kubenswrapper[4955]: E0202 13:04:13.819902 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:04:13 crc kubenswrapper[4955]: E0202 13:04:13.819954 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs podName:009c80d7-da9c-46cc-b0d2-570de04e6510 nodeName:}" failed. No retries permitted until 2026-02-02 13:05:17.819939193 +0000 UTC m=+168.732275643 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs") pod "network-metrics-daemon-hjcmj" (UID: "009c80d7-da9c-46cc-b0d2-570de04e6510") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.902203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.902251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.902267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.902286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:13 crc kubenswrapper[4955]: I0202 13:04:13.902299 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:13Z","lastTransitionTime":"2026-02-02T13:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.005525 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.005629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.005654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.005686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
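The mount failure above ends with "No retries permitted until ... (durationBeforeRetry 1m4s)": the volume manager retries failed MountVolume.SetUp operations with an exponentially growing delay, and 1m4s is what a doubling series reaches after several failures. A small sketch of that pattern follows; the initial delay and cap are assumptions chosen to reproduce the logged 1m4s value, not values read from this cluster.

```go
// Hedged sketch of exponential retry backoff: doubling from an assumed
// 500ms initial delay reaches the logged 1m4s on the eighth attempt.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond           // assumed initial delay
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Run as written, attempt 8 prints 1m4s, matching the retry window in the nestedpendingoperations.go entry above.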
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.005711 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.108736 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.108779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.108790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.108803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.108813 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.211309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.211360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.211372 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.211395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.211407 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.313970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.314031 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.314050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.314077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.314095 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.416742 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.416794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.416807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.416825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.416841 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.486112 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:34:54.482816048 +0000 UTC Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.519448 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.519507 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.519525 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.519547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.519602 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.623043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.623091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.623105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.623123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.623136 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.715215 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.715366 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:14 crc kubenswrapper[4955]: E0202 13:04:14.715476 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:14 crc kubenswrapper[4955]: E0202 13:04:14.715693 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.725935 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.726025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.726051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.726127 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.726153 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.828910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.828945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.828958 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.828972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.828982 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.931324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.931372 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.931383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.931397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:14 crc kubenswrapper[4955]: I0202 13:04:14.931406 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:14Z","lastTransitionTime":"2026-02-02T13:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.033830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.033883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.033898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.033917 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.033929 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.136614 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.136670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.136685 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.136704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.136717 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.239360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.239427 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.239443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.239459 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.239472 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.344419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.344486 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.344513 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.344544 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.344604 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.448061 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.448194 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.448222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.448250 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.448274 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.486948 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:05:40.303906148 +0000 UTC
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.551018 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.551062 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.551087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.551103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.551113 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.654102 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.654182 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.654219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.654252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.654277 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.715817 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.715830 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
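The certificate_manager.go:356 entries report a different rotation deadline on each pass (2025-11-22, 2025-11-29, 2025-12-12, 2026-01-11, ...) against the same 2026-02-24 expiration. That is expected: the manager re-randomizes the deadline at some fraction of the certificate lifetime each time it evaluates rotation. The sketch below assumes the commonly cited 70-90% jitter window and a one-year lifetime; both are assumptions, not values read from this log.

```go
// Hedged sketch: recomputing a jittered rotation deadline, which is why
// successive log lines show different deadlines for one certificate.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point 70-90% through the cert lifetime
// (assumed jitter window).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; issuance assumed one year earlier.
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0) // assumption
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

Under these assumptions the computed deadlines land between early November 2025 and mid January 2026, the same range the log shows.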
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:15 crc kubenswrapper[4955]: E0202 13:04:15.716086 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.756647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.756688 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.756702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.756719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.756730 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.859223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.859272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.859288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.859308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.859325 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.962145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.962191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.962203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.962220 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:15 crc kubenswrapper[4955]: I0202 13:04:15.962230 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:15Z","lastTransitionTime":"2026-02-02T13:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.064374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.064416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.064426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.064438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.064447 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.167323 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.167366 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.167376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.167393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.167404 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.270576 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.270624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.270635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.270652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.270664 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.373415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.373483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.373507 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.373614 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.373643 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.475700 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.475739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.475753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.475767 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.475778 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.487298 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:06:10.269107696 +0000 UTC
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.579183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.579272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.579302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.579333 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.579356 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.682428 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.682498 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.682521 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.682550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.682620 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.716738 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:16 crc kubenswrapper[4955]: E0202 13:04:16.716877 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
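Every "no CNI configuration file in /etc/kubernetes/cni/net.d/" error above means the container runtime has not yet found a network configuration in that directory; on this cluster it appears once the network operator writes one. A minimal sketch of the same check follows, assuming the standard CNI behavior of loading .conf, .conflist, and .json files; the path is taken from the log.

```go
// Hedged sketch: list /etc/kubernetes/cni/net.d and report whether any
// CNI configuration file is present, mirroring the runtime's complaint.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni loads (assumed)
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file; network plugin not ready")
	}
}
```

Until that directory is populated, the kubelet keeps reporting NetworkReady=false and refuses to create sandboxes for non-host-network pods, which is exactly the loop this log records.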
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.717097 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:16 crc kubenswrapper[4955]: E0202 13:04:16.717169 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.784998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.785043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.785053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.785067 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.785075 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.887397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.887444 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.887456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.887471 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.887480 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.990174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.990226 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.990239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.990256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:16 crc kubenswrapper[4955]: I0202 13:04:16.990268 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:16Z","lastTransitionTime":"2026-02-02T13:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.093105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.093164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.093176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.093191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.093200 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:17Z","lastTransitionTime":"2026-02-02T13:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.195153 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.195218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.195241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.195268 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.195285 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:17Z","lastTransitionTime":"2026-02-02T13:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.298251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.298341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.298374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.298405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.298424 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:17Z","lastTransitionTime":"2026-02-02T13:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.401877 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.401926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.401946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.401969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.401980 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:17Z","lastTransitionTime":"2026-02-02T13:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.487759 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:40:10.91679196 +0000 UTC Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.505295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.505332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.505342 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.505358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.505370 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:17Z","lastTransitionTime":"2026-02-02T13:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.572753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.572805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.572817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.572835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.572846 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:04:17Z","lastTransitionTime":"2026-02-02T13:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.619575 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b"] Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.619980 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.622294 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.622517 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.622536 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.622700 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.636197 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dxh2p" podStartSLOduration=82.636117191 podStartE2EDuration="1m22.636117191s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.635391733 +0000 UTC m=+108.547728283" watchObservedRunningTime="2026-02-02 13:04:17.636117191 +0000 UTC m=+108.548453681" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.659868 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.659809345 podStartE2EDuration="57.659809345s" podCreationTimestamp="2026-02-02 13:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.659523878 +0000 UTC m=+108.571860348" watchObservedRunningTime="2026-02-02 13:04:17.659809345 +0000 UTC m=+108.572145815" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.661239 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06129cd-c625-4965-bdb3-f7d2a87ad45c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.661271 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c06129cd-c625-4965-bdb3-f7d2a87ad45c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.661296 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c06129cd-c625-4965-bdb3-f7d2a87ad45c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.661310 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-crzll" 
podStartSLOduration=82.661302563 podStartE2EDuration="1m22.661302563s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.648206244 +0000 UTC m=+108.560542714" watchObservedRunningTime="2026-02-02 13:04:17.661302563 +0000 UTC m=+108.573639023" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.661328 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c06129cd-c625-4965-bdb3-f7d2a87ad45c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.661540 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c06129cd-c625-4965-bdb3-f7d2a87ad45c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.669263 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.669245152 podStartE2EDuration="27.669245152s" podCreationTimestamp="2026-02-02 13:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.66916957 +0000 UTC m=+108.581506020" watchObservedRunningTime="2026-02-02 13:04:17.669245152 +0000 UTC m=+108.581581612" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.716148 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.716174 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:17 crc kubenswrapper[4955]: E0202 13:04:17.716256 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:17 crc kubenswrapper[4955]: E0202 13:04:17.716342 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.728845 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.728831036 podStartE2EDuration="28.728831036s" podCreationTimestamp="2026-02-02 13:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.728505548 +0000 UTC m=+108.640841998" watchObservedRunningTime="2026-02-02 13:04:17.728831036 +0000 UTC m=+108.641167486" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.762818 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c06129cd-c625-4965-bdb3-f7d2a87ad45c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.762880 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06129cd-c625-4965-bdb3-f7d2a87ad45c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.762910 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c06129cd-c625-4965-bdb3-f7d2a87ad45c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.762934 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c06129cd-c625-4965-bdb3-f7d2a87ad45c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.762956 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c06129cd-c625-4965-bdb3-f7d2a87ad45c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.763035 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c06129cd-c625-4965-bdb3-f7d2a87ad45c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.763777 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c06129cd-c625-4965-bdb3-f7d2a87ad45c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: 
\"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.763847 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c06129cd-c625-4965-bdb3-f7d2a87ad45c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.775985 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06129cd-c625-4965-bdb3-f7d2a87ad45c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.779159 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c06129cd-c625-4965-bdb3-f7d2a87ad45c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8dk2b\" (UID: \"c06129cd-c625-4965-bdb3-f7d2a87ad45c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.831882 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podStartSLOduration=82.83186519 podStartE2EDuration="1m22.83186519s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.831171222 +0000 UTC m=+108.743507662" watchObservedRunningTime="2026-02-02 13:04:17.83186519 +0000 UTC m=+108.744201640" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.843875 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2cq7v" podStartSLOduration=81.84385644 podStartE2EDuration="1m21.84385644s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.84341509 +0000 UTC m=+108.755751540" watchObservedRunningTime="2026-02-02 13:04:17.84385644 +0000 UTC m=+108.756192890" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.873680 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.873664619 podStartE2EDuration="1m28.873664619s" podCreationTimestamp="2026-02-02 13:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.858942209 +0000 UTC m=+108.771278659" watchObservedRunningTime="2026-02-02 13:04:17.873664619 +0000 UTC m=+108.786001069" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.884925 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=22.884907401 podStartE2EDuration="22.884907401s" podCreationTimestamp="2026-02-02 13:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.873151345 +0000 UTC m=+108.785487795" watchObservedRunningTime="2026-02-02 13:04:17.884907401 +0000 UTC m=+108.797243861" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.901260 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rplmq" podStartSLOduration=82.90124483 podStartE2EDuration="1m22.90124483s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.899596168 +0000 UTC m=+108.811932618" watchObservedRunningTime="2026-02-02 13:04:17.90124483 +0000 UTC m=+108.813581280" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.913780 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7bpsz" podStartSLOduration=82.913761944 podStartE2EDuration="1m22.913761944s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:17.913485467 +0000 UTC m=+108.825821917" watchObservedRunningTime="2026-02-02 13:04:17.913761944 +0000 UTC m=+108.826098394" Feb 02 13:04:17 crc kubenswrapper[4955]: I0202 13:04:17.940149 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.487954 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:44:18.351086921 +0000 UTC Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.488298 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.496222 4955 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.715684 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.715722 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:18 crc kubenswrapper[4955]: E0202 13:04:18.715832 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:18 crc kubenswrapper[4955]: E0202 13:04:18.716199 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.759749 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" event={"ID":"c06129cd-c625-4965-bdb3-f7d2a87ad45c","Type":"ContainerStarted","Data":"af6e767266e02f5b7482865294e4fce580dd2c4c2d1aeef32680eab01c41a0be"} Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.759806 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" event={"ID":"c06129cd-c625-4965-bdb3-f7d2a87ad45c","Type":"ContainerStarted","Data":"ee28f876d6acde87d6cd4f0d053ac9e6c6879244d9e16274de5ab309592b2173"} Feb 02 13:04:18 crc kubenswrapper[4955]: I0202 13:04:18.803021 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8dk2b" podStartSLOduration=83.802998846 podStartE2EDuration="1m23.802998846s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:18.797467517 +0000 UTC m=+109.709804017" watchObservedRunningTime="2026-02-02 13:04:18.802998846 +0000 UTC m=+109.715335336" Feb 02 13:04:19 crc kubenswrapper[4955]: I0202 13:04:19.715912 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:19 crc kubenswrapper[4955]: I0202 13:04:19.717025 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:19 crc kubenswrapper[4955]: E0202 13:04:19.717111 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:19 crc kubenswrapper[4955]: E0202 13:04:19.717253 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:19 crc kubenswrapper[4955]: I0202 13:04:19.717456 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:04:19 crc kubenswrapper[4955]: E0202 13:04:19.717614 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:04:20 crc kubenswrapper[4955]: I0202 13:04:20.715897 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:20 crc kubenswrapper[4955]: I0202 13:04:20.715901 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:20 crc kubenswrapper[4955]: E0202 13:04:20.716175 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:20 crc kubenswrapper[4955]: E0202 13:04:20.716306 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:21 crc kubenswrapper[4955]: I0202 13:04:21.715437 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:21 crc kubenswrapper[4955]: E0202 13:04:21.715587 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:21 crc kubenswrapper[4955]: I0202 13:04:21.715438 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:21 crc kubenswrapper[4955]: E0202 13:04:21.715782 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:22 crc kubenswrapper[4955]: I0202 13:04:22.716064 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:22 crc kubenswrapper[4955]: I0202 13:04:22.716087 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:22 crc kubenswrapper[4955]: E0202 13:04:22.716302 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:22 crc kubenswrapper[4955]: E0202 13:04:22.716498 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:23 crc kubenswrapper[4955]: I0202 13:04:23.715924 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:23 crc kubenswrapper[4955]: I0202 13:04:23.716063 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:23 crc kubenswrapper[4955]: E0202 13:04:23.716161 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:23 crc kubenswrapper[4955]: E0202 13:04:23.716355 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:24 crc kubenswrapper[4955]: I0202 13:04:24.715375 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:24 crc kubenswrapper[4955]: E0202 13:04:24.715475 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:24 crc kubenswrapper[4955]: I0202 13:04:24.715375 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:24 crc kubenswrapper[4955]: E0202 13:04:24.715532 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:25 crc kubenswrapper[4955]: I0202 13:04:25.715318 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:25 crc kubenswrapper[4955]: I0202 13:04:25.715451 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:25 crc kubenswrapper[4955]: E0202 13:04:25.715662 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:25 crc kubenswrapper[4955]: E0202 13:04:25.716090 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:26 crc kubenswrapper[4955]: I0202 13:04:26.715342 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:26 crc kubenswrapper[4955]: I0202 13:04:26.715356 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:26 crc kubenswrapper[4955]: E0202 13:04:26.715469 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:26 crc kubenswrapper[4955]: E0202 13:04:26.715694 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:27 crc kubenswrapper[4955]: I0202 13:04:27.715689 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:27 crc kubenswrapper[4955]: I0202 13:04:27.715850 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:27 crc kubenswrapper[4955]: E0202 13:04:27.715977 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:27 crc kubenswrapper[4955]: E0202 13:04:27.716076 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:28 crc kubenswrapper[4955]: I0202 13:04:28.716336 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:28 crc kubenswrapper[4955]: I0202 13:04:28.716416 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:28 crc kubenswrapper[4955]: E0202 13:04:28.716804 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:28 crc kubenswrapper[4955]: E0202 13:04:28.716641 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:29 crc kubenswrapper[4955]: I0202 13:04:29.715720 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:29 crc kubenswrapper[4955]: I0202 13:04:29.715793 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:29 crc kubenswrapper[4955]: E0202 13:04:29.717408 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:29 crc kubenswrapper[4955]: E0202 13:04:29.717584 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:29 crc kubenswrapper[4955]: E0202 13:04:29.741915 4955 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 13:04:29 crc kubenswrapper[4955]: E0202 13:04:29.809041 4955 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:04:30 crc kubenswrapper[4955]: I0202 13:04:30.716156 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:30 crc kubenswrapper[4955]: I0202 13:04:30.716286 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:30 crc kubenswrapper[4955]: E0202 13:04:30.716443 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:30 crc kubenswrapper[4955]: E0202 13:04:30.716661 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:31 crc kubenswrapper[4955]: I0202 13:04:31.715612 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:31 crc kubenswrapper[4955]: I0202 13:04:31.715698 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:31 crc kubenswrapper[4955]: E0202 13:04:31.715768 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:31 crc kubenswrapper[4955]: E0202 13:04:31.715852 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:31 crc kubenswrapper[4955]: I0202 13:04:31.716913 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:04:31 crc kubenswrapper[4955]: E0202 13:04:31.717139 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z2cps_openshift-ovn-kubernetes(e0d35d22-ea6a-4ada-a086-b199c153c940)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" Feb 02 13:04:32 crc kubenswrapper[4955]: I0202 13:04:32.715593 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:32 crc kubenswrapper[4955]: I0202 13:04:32.715667 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:32 crc kubenswrapper[4955]: E0202 13:04:32.715855 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:32 crc kubenswrapper[4955]: E0202 13:04:32.715998 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:33 crc kubenswrapper[4955]: I0202 13:04:33.715341 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:33 crc kubenswrapper[4955]: I0202 13:04:33.715361 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:33 crc kubenswrapper[4955]: E0202 13:04:33.715451 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:33 crc kubenswrapper[4955]: E0202 13:04:33.715656 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.715250 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.715348 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:04:34 crc kubenswrapper[4955]: E0202 13:04:34.715390 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:04:34 crc kubenswrapper[4955]: E0202 13:04:34.715513 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:04:34 crc kubenswrapper[4955]: E0202 13:04:34.810148 4955 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.813192 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/1.log" Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.813747 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/0.log" Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.813790 4955 generic.go:334] "Generic (PLEG): container finished" podID="93e471b4-0f7f-4216-8f9c-911f21b64e1e" containerID="a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb" exitCode=1 Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.813814 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerDied","Data":"a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb"} Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.813842 4955 scope.go:117] "RemoveContainer" containerID="7fb12a4dd303602cd50f0d57f2bd5b79d57b743df9ff21616e5a5bbe34303981" Feb 02 13:04:34 crc kubenswrapper[4955]: I0202 13:04:34.814178 4955 scope.go:117] "RemoveContainer" containerID="a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb" Feb 02 13:04:34 crc kubenswrapper[4955]: E0202 13:04:34.814357 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7bpsz_openshift-multus(93e471b4-0f7f-4216-8f9c-911f21b64e1e)\"" pod="openshift-multus/multus-7bpsz" podUID="93e471b4-0f7f-4216-8f9c-911f21b64e1e" Feb 02 13:04:35 crc kubenswrapper[4955]: I0202 13:04:35.715487 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:04:35 crc kubenswrapper[4955]: I0202 13:04:35.715529 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:04:35 crc kubenswrapper[4955]: E0202 13:04:35.716119 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:04:35 crc kubenswrapper[4955]: E0202 13:04:35.716286 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510" Feb 02 13:04:35 crc kubenswrapper[4955]: I0202 13:04:35.818692 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/1.log" Feb 02 13:04:36 crc kubenswrapper[4955]: I0202 13:04:36.716117 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:36 crc kubenswrapper[4955]: E0202 13:04:36.716254 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:36 crc kubenswrapper[4955]: I0202 13:04:36.716448 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:36 crc kubenswrapper[4955]: E0202 13:04:36.716536 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:37 crc kubenswrapper[4955]: I0202 13:04:37.715678 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:37 crc kubenswrapper[4955]: E0202 13:04:37.715835 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:37 crc kubenswrapper[4955]: I0202 13:04:37.715897 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:37 crc kubenswrapper[4955]: E0202 13:04:37.716108 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:38 crc kubenswrapper[4955]: I0202 13:04:38.715824 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:38 crc kubenswrapper[4955]: I0202 13:04:38.715856 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:38 crc kubenswrapper[4955]: E0202 13:04:38.715978 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:38 crc kubenswrapper[4955]: E0202 13:04:38.716103 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:39 crc kubenswrapper[4955]: I0202 13:04:39.715781 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:39 crc kubenswrapper[4955]: E0202 13:04:39.718175 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:39 crc kubenswrapper[4955]: I0202 13:04:39.718543 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:39 crc kubenswrapper[4955]: E0202 13:04:39.718802 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:39 crc kubenswrapper[4955]: E0202 13:04:39.811171 4955 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 13:04:40 crc kubenswrapper[4955]: I0202 13:04:40.715816 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:40 crc kubenswrapper[4955]: I0202 13:04:40.715940 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:40 crc kubenswrapper[4955]: E0202 13:04:40.716028 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:40 crc kubenswrapper[4955]: E0202 13:04:40.716217 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:41 crc kubenswrapper[4955]: I0202 13:04:41.716072 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:41 crc kubenswrapper[4955]: I0202 13:04:41.716146 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:41 crc kubenswrapper[4955]: E0202 13:04:41.716247 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:41 crc kubenswrapper[4955]: E0202 13:04:41.716481 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:42 crc kubenswrapper[4955]: I0202 13:04:42.715241 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:42 crc kubenswrapper[4955]: I0202 13:04:42.715266 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:42 crc kubenswrapper[4955]: E0202 13:04:42.715420 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:42 crc kubenswrapper[4955]: E0202 13:04:42.715546 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:43 crc kubenswrapper[4955]: I0202 13:04:43.715535 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:43 crc kubenswrapper[4955]: I0202 13:04:43.715535 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:43 crc kubenswrapper[4955]: E0202 13:04:43.715803 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:43 crc kubenswrapper[4955]: E0202 13:04:43.715989 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:44 crc kubenswrapper[4955]: I0202 13:04:44.715924 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:44 crc kubenswrapper[4955]: E0202 13:04:44.716076 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:44 crc kubenswrapper[4955]: I0202 13:04:44.716830 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:44 crc kubenswrapper[4955]: E0202 13:04:44.717025 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:44 crc kubenswrapper[4955]: I0202 13:04:44.717945 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"
Feb 02 13:04:44 crc kubenswrapper[4955]: E0202 13:04:44.812992 4955 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.616939 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hjcmj"]
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.617069 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:45 crc kubenswrapper[4955]: E0202 13:04:45.617168 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.715742 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:45 crc kubenswrapper[4955]: E0202 13:04:45.716171 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.716832 4955 scope.go:117] "RemoveContainer" containerID="a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.853083 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/3.log"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.855729 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerStarted","Data":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"}
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.856119 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.856874 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/1.log"
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.856915 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerStarted","Data":"6c3e909a5c1d539466ff95169b2a61636dd2e41a3596e08414bd64392d29dd9f"}
Feb 02 13:04:45 crc kubenswrapper[4955]: I0202 13:04:45.883716 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podStartSLOduration=110.883693004 podStartE2EDuration="1m50.883693004s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:45.883396407 +0000 UTC m=+136.795732857" watchObservedRunningTime="2026-02-02 13:04:45.883693004 +0000 UTC m=+136.796029454"
Feb 02 13:04:46 crc kubenswrapper[4955]: I0202 13:04:46.715521 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:46 crc kubenswrapper[4955]: I0202 13:04:46.715593 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:46 crc kubenswrapper[4955]: I0202 13:04:46.715529 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:46 crc kubenswrapper[4955]: E0202 13:04:46.715696 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:46 crc kubenswrapper[4955]: E0202 13:04:46.715801 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:46 crc kubenswrapper[4955]: E0202 13:04:46.715891 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:47 crc kubenswrapper[4955]: I0202 13:04:47.716308 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:47 crc kubenswrapper[4955]: E0202 13:04:47.716484 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:48 crc kubenswrapper[4955]: I0202 13:04:48.715873 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:48 crc kubenswrapper[4955]: I0202 13:04:48.715944 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:48 crc kubenswrapper[4955]: E0202 13:04:48.716084 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:04:48 crc kubenswrapper[4955]: E0202 13:04:48.716250 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjcmj" podUID="009c80d7-da9c-46cc-b0d2-570de04e6510"
Feb 02 13:04:48 crc kubenswrapper[4955]: I0202 13:04:48.715873 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:48 crc kubenswrapper[4955]: E0202 13:04:48.716398 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:04:49 crc kubenswrapper[4955]: I0202 13:04:49.715631 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:49 crc kubenswrapper[4955]: E0202 13:04:49.716476 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.715964 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.716290 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.716876 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.719461 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.719805 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.719988 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.720064 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.720416 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 13:04:50 crc kubenswrapper[4955]: I0202 13:04:50.721473 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 13:04:51 crc kubenswrapper[4955]: I0202 13:04:51.715848 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:53 crc kubenswrapper[4955]: I0202 13:04:53.190149 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.794734 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.794947 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.795055 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.795113 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:57 crc kubenswrapper[4955]: E0202 13:04:57.795184 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:06:59.795104656 +0000 UTC m=+270.707441146 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.795347 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.796712 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.803620 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.805452 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.808115 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.938854 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:57 crc kubenswrapper[4955]: I0202 13:04:57.967270 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.044955 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:04:58 crc kubenswrapper[4955]: W0202 13:04:58.431260 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ef6335465ead84484afb396a5cad0774acfffec1fb1c8afc76b2f157eeca37e1 WatchSource:0}: Error finding container ef6335465ead84484afb396a5cad0774acfffec1fb1c8afc76b2f157eeca37e1: Status 404 returned error can't find the container with id ef6335465ead84484afb396a5cad0774acfffec1fb1c8afc76b2f157eeca37e1
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.814889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.848889 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g44jq"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.849420 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tkkbx"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.849835 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.850264 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g44jq"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.858613 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.858753 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.858845 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.858965 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.859246 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.859310 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.859425 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.859600 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.859754 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.859910 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.860074 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.860180 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.860431 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.860636 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.860684 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.860731 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.867757 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.869953 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.870449 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mj5sp"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.871355 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.871743 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.873246 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.887027 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.887132 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.888759 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.888929 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.889071 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.889212 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.889398 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.889747 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.890002 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.890339 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g2vv4"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.890555 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.890704 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.890738 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.892325 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.892649 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.892923 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.893107 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.893628 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.893793 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.893792 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.894022 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.894182 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.894612 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.894659 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.896756 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.897319 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.901574 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.902732 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.903102 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cc08510b29fd090f0abcdf7237f246d1fac82d7d58907c59af328dc55c136c75"}
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.903150 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f636c6589cd26901507c41863501b7d498d12a11825d4947a0b2639225ce3653"}
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.903233 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmwz5"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.903805 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.903880 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.905085 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.908677 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9a5d3e7639c65f9bf80bdc88f35a74f65c154bc5439df7e00455429df06a620e"}
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.908720 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ab7510af71c4ae4fe4c93c20b1a9ca28d3220dd6ddb99a0985225b5cbca28843"}
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.908944 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6tx72"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.909242 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.909488 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a08315a043fb35e74af06631fe227c32804a0cacae221865ecb2c099f963262c"}
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.909523 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ef6335465ead84484afb396a5cad0774acfffec1fb1c8afc76b2f157eeca37e1"}
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.910143 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6tx72"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.912845 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913155 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913412 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913419 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913618 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913703 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913796 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.913813 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.916953 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.918377 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.922546 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q7gnb"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.922977 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-q7gnb"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.930649 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.931535 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.934312 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-59w8t"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.937658 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mbssd"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.938037 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tkkbx"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.938062 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.938548 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.938588 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.938809 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.939035 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-59w8t"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.939196 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.939279 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.945823 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.946126 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.946280 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.946480 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.946657 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.946808 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.946949 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.947099 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.947221 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.947352 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.947491 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.947777 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.947990 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.948125 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.948246 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.948370 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.948503 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.949087 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.949472 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.950171 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.950399 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.950632 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.950667 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.951092 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.952494 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.952595 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.952737 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.953940 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.955169 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.973501 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g44jq"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.973552 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9vptd"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.974112 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9vptd"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.974763 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975106 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975222 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975291 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975368 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975545 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975662 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.975988 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976102 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976138 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976200 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976107 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976250 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976303 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976618 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-bjmqj"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976700 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976859 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.976958 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.977143 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.977244 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bjmqj"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.977318 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.977757 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.977789 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.978051 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.978688 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.979253 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.979697 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.980683 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.980962 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.984998 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mj5sp"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.985042 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.985591 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.985781 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.986203 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.986888 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.987601 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.987618 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.987692 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.988204 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.988492 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.988864 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.989359 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.989832 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.990504 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.991159 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.992781 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.993364 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mrjsw"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.993792 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw"
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.997707 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs"]
Feb 02 13:04:58 crc kubenswrapper[4955]: I0202 13:04:58.998370 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"]
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.016495 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs"
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.016909 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-etcd-serving-ca\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.016956 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.016984 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-encryption-config\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017015 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-oauth-serving-cert\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017047 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-client-ca\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017066 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017088 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-config\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017114 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-images\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-dir\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017164 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017188 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-image-import-ca\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017221 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89c5a55-99d3-4005-b90c-71041477fb75-serving-cert\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a70ee1-d1f0-4373-93b5-2132600f76b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017289 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017316 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-oauth-config\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017502 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017618 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7k5\" (UniqueName: 
\"kubernetes.io/projected/aa2e4282-fadd-4ef2-a933-ca151ce9acde-kube-api-access-2n7k5\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017648 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e88a945d-172b-40d3-938d-444a4d65bf11-node-pullsecrets\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017669 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-trusted-ca-bundle\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017696 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea7da83a-3612-415d-9d5b-4684e1d38cde-auth-proxy-config\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017729 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017758 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017775 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de431f83-45ff-443c-b87c-d7ac12a3d71f-audit-dir\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017823 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017850 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-config\") pod 
\"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017872 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017904 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/586f9380-1574-4d6b-847d-d775fc1508b0-serving-cert\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017933 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-service-ca\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.017986 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.018101 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6pt\" (UniqueName: \"kubernetes.io/projected/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-kube-api-access-5r6pt\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.018194 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njn4\" (UniqueName: \"kubernetes.io/projected/586f9380-1574-4d6b-847d-d775fc1508b0-kube-api-access-7njn4\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.018248 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.018256 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4r7\" (UniqueName: \"kubernetes.io/projected/ea7da83a-3612-415d-9d5b-4684e1d38cde-kube-api-access-db4r7\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022751 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022770 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022827 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-serving-cert\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022858 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-audit\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022889 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e88a945d-172b-40d3-938d-444a4d65bf11-audit-dir\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022916 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxd6c\" (UniqueName: \"kubernetes.io/projected/38a70ee1-d1f0-4373-93b5-2132600f76b6-kube-api-access-sxd6c\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.022653 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023022 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc4pf\" (UniqueName: \"kubernetes.io/projected/e89c5a55-99d3-4005-b90c-71041477fb75-kube-api-access-zc4pf\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023051 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023087 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-audit-policies\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023107 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-encryption-config\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023133 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jxzv\" (UniqueName: \"kubernetes.io/projected/e88a945d-172b-40d3-938d-444a4d65bf11-kube-api-access-8jxzv\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023161 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-config\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023188 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-config\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023213 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023252 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccvr\" (UniqueName: \"kubernetes.io/projected/de431f83-45ff-443c-b87c-d7ac12a3d71f-kube-api-access-bccvr\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023299 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023321 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-serving-cert\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023392 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023433 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-service-ca-bundle\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023463 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023496 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vzd\" (UniqueName: \"kubernetes.io/projected/135262fe-e63f-4d62-8260-4a90ee8c1f26-kube-api-access-g6vzd\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023538 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-etcd-client\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023596 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkvnb\" (UniqueName: \"kubernetes.io/projected/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-kube-api-access-wkvnb\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023629 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023671 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023707 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e4282-fadd-4ef2-a933-ca151ce9acde-serving-cert\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023756 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7da83a-3612-415d-9d5b-4684e1d38cde-config\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023825 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6dh\" (UniqueName: \"kubernetes.io/projected/ddec21a9-43c9-4885-abde-9e65c9a8762d-kube-api-access-rd6dh\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023864 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-config\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023906 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ea7da83a-3612-415d-9d5b-4684e1d38cde-machine-approver-tls\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023957 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g2vv4"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.023990 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rrdgr"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.025883 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.024104 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-policies\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026172 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa2e4282-fadd-4ef2-a933-ca151ce9acde-trusted-ca\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026191 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/135262fe-e63f-4d62-8260-4a90ee8c1f26-serving-cert\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026211 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a70ee1-d1f0-4373-93b5-2132600f76b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026226 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026241 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2e4282-fadd-4ef2-a933-ca151ce9acde-config\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026307 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-etcd-client\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026345 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-serving-cert\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc 
kubenswrapper[4955]: I0202 13:04:59.026387 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-config\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-client-ca\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026450 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.026590 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.027538 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.027864 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.027947 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.027965 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-74zm4"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.028158 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.028329 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.029264 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-46rd2"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.030087 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q54rv"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.030742 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kpfmp"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.031689 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.031895 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.032076 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q54rv" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.032786 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.033922 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6tx72"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.034993 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmwz5"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.037031 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.037217 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mbssd"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.038047 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-59w8t"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.039097 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9vptd"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.040145 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.041171 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q7gnb"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.042227 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.043235 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.044334 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.045228 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.046245 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.047287 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.048475 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.050589 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.051910 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 
13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.052662 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.053775 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rrdgr"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.055357 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.057115 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.058481 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.059611 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.060979 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mrjsw"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.062507 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.063802 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-74zm4"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.065358 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ddl6c"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.066013 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ddl6c" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.066502 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.067789 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.069015 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9mrfh"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.070128 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.070222 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q54rv"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.071313 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.072502 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.072646 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.073641 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ddl6c"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.075068 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-46rd2"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.076101 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9mrfh"] Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.092093 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.112142 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127135 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/586f9380-1574-4d6b-847d-d775fc1508b0-serving-cert\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127168 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-service-ca\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127192 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7c59585-55a6-4686-a998-058c2228f134-apiservice-cert\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127213 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6pt\" (UniqueName: \"kubernetes.io/projected/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-kube-api-access-5r6pt\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127230 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db4r7\" (UniqueName: 
\"kubernetes.io/projected/ea7da83a-3612-415d-9d5b-4684e1d38cde-kube-api-access-db4r7\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127248 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127264 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-serving-cert\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127279 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-audit\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127322 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e88a945d-172b-40d3-938d-444a4d65bf11-audit-dir\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127341 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxd6c\" (UniqueName: \"kubernetes.io/projected/38a70ee1-d1f0-4373-93b5-2132600f76b6-kube-api-access-sxd6c\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127358 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njn4\" (UniqueName: \"kubernetes.io/projected/586f9380-1574-4d6b-847d-d775fc1508b0-kube-api-access-7njn4\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127376 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dca84748-635f-4929-9259-e64bc022a883-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127396 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc4pf\" (UniqueName: \"kubernetes.io/projected/e89c5a55-99d3-4005-b90c-71041477fb75-kube-api-access-zc4pf\") pod 
\"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127429 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7c59585-55a6-4686-a998-058c2228f134-webhook-cert\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127447 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-audit-policies\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127481 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-encryption-config\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127499 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jxzv\" (UniqueName: \"kubernetes.io/projected/e88a945d-172b-40d3-938d-444a4d65bf11-kube-api-access-8jxzv\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127518 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-config\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127534 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-config\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127551 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127595 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127611 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bccvr\" (UniqueName: \"kubernetes.io/projected/de431f83-45ff-443c-b87c-d7ac12a3d71f-kube-api-access-bccvr\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127627 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-serving-cert\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127664 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-serving-cert\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127686 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127708 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-service-ca-bundle\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127730 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vzd\" (UniqueName: \"kubernetes.io/projected/135262fe-e63f-4d62-8260-4a90ee8c1f26-kube-api-access-g6vzd\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127746 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127763 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-etcd-client\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127778 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkvnb\" (UniqueName: \"kubernetes.io/projected/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-kube-api-access-wkvnb\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127794 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgxc\" (UniqueName: \"kubernetes.io/projected/e9e6e2cf-9009-4951-bd8d-7878af4bd041-kube-api-access-dhgxc\") pod \"downloads-7954f5f757-59w8t\" (UID: \"e9e6e2cf-9009-4951-bd8d-7878af4bd041\") " pod="openshift-console/downloads-7954f5f757-59w8t" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127810 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86z9g\" (UniqueName: \"kubernetes.io/projected/a5a8e507-be3d-4682-894b-249235ecc978-kube-api-access-86z9g\") pod \"cluster-samples-operator-665b6dd947-qfhzm\" (UID: \"a5a8e507-be3d-4682-894b-249235ecc978\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127828 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127862 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e4282-fadd-4ef2-a933-ca151ce9acde-serving-cert\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127878 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp5x\" (UniqueName: \"kubernetes.io/projected/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-kube-api-access-8mp5x\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127893 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/a5a8e507-be3d-4682-894b-249235ecc978-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qfhzm\" (UID: \"a5a8e507-be3d-4682-894b-249235ecc978\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127911 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7da83a-3612-415d-9d5b-4684e1d38cde-config\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127938 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6dh\" (UniqueName: \"kubernetes.io/projected/ddec21a9-43c9-4885-abde-9e65c9a8762d-kube-api-access-rd6dh\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127958 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-config\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127978 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ea7da83a-3612-415d-9d5b-4684e1d38cde-machine-approver-tls\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.127999 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-policies\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128017 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a70ee1-d1f0-4373-93b5-2132600f76b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128035 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa2e4282-fadd-4ef2-a933-ca151ce9acde-trusted-ca\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128054 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/135262fe-e63f-4d62-8260-4a90ee8c1f26-serving-cert\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128074 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2e4282-fadd-4ef2-a933-ca151ce9acde-config\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128095 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pns\" (UniqueName: \"kubernetes.io/projected/cf79b3b7-5bd1-4877-b5ab-1141d969437c-kube-api-access-76pns\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128115 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128135 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128154 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-etcd-client\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128172 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-serving-cert\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128192 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-config\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128209 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-client-ca\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128264 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-etcd-serving-ca\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128286 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-encryption-config\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128305 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-oauth-serving-cert\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128325 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128345 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-client-ca\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128365 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128386 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-config\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128403 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-images\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128422 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kh5z\" (UniqueName: \"kubernetes.io/projected/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-kube-api-access-7kh5z\") pod 
\"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128441 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-dir\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128484 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-image-import-ca\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128504 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpzs\" (UniqueName: \"kubernetes.io/projected/dca84748-635f-4929-9259-e64bc022a883-kube-api-access-rqpzs\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128523 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf79b3b7-5bd1-4877-b5ab-1141d969437c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128541 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89c5a55-99d3-4005-b90c-71041477fb75-serving-cert\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128575 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a70ee1-d1f0-4373-93b5-2132600f76b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128599 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128617 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-oauth-config\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128663 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128683 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e327a694-d78a-4a74-b353-dbcc4d4ce040-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n7k5\" (UniqueName: \"kubernetes.io/projected/aa2e4282-fadd-4ef2-a933-ca151ce9acde-kube-api-access-2n7k5\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128723 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e88a945d-172b-40d3-938d-444a4d65bf11-node-pullsecrets\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128741 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-trusted-ca-bundle\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128761 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-config\") pod 
\"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128782 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128804 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea7da83a-3612-415d-9d5b-4684e1d38cde-auth-proxy-config\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128826 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128844 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de431f83-45ff-443c-b87c-d7ac12a3d71f-audit-dir\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128864 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca84748-635f-4929-9259-e64bc022a883-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128882 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c7c59585-55a6-4686-a998-058c2228f134-tmpfs\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128902 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf79b3b7-5bd1-4877-b5ab-1141d969437c-proxy-tls\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128923 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-config\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" 
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128942 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128961 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.128983 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6dxw\" (UniqueName: \"kubernetes.io/projected/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-kube-api-access-j6dxw\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.129001 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89wx\" (UniqueName: \"kubernetes.io/projected/c7c59585-55a6-4686-a998-058c2228f134-kube-api-access-g89wx\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.129019 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhwr\" (UniqueName: \"kubernetes.io/projected/e327a694-d78a-4a74-b353-dbcc4d4ce040-kube-api-access-jnhwr\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.129040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.132262 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.132401 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2e4282-fadd-4ef2-a933-ca151ce9acde-config\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " 
pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.132706 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.133114 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-service-ca\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.136346 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/586f9380-1574-4d6b-847d-d775fc1508b0-serving-cert\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.136497 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-etcd-client\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.136588 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de431f83-45ff-443c-b87c-d7ac12a3d71f-audit-dir\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.136988 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.137201 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e88a945d-172b-40d3-938d-444a4d65bf11-node-pullsecrets\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.137259 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-config\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.138274 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-trusted-ca-bundle\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 
02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.138846 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.139457 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea7da83a-3612-415d-9d5b-4684e1d38cde-auth-proxy-config\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.140013 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-oauth-config\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.140740 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-images\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.140808 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-dir\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.141294 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.142154 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-image-import-ca\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.143175 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-config\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.143215 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g44jq\" (UID: 
\"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.144628 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.155330 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-audit\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.155816 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-serving-cert\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.155866 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-serving-cert\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.156259 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89c5a55-99d3-4005-b90c-71041477fb75-serving-cert\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.156326 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.156576 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.158185 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-config\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.158695 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-config\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.158904 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e88a945d-172b-40d3-938d-444a4d65bf11-etcd-serving-ca\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.159425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-service-ca-bundle\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.159728 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa2e4282-fadd-4ef2-a933-ca151ce9acde-trusted-ca\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.160468 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-client-ca\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.160995 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89c5a55-99d3-4005-b90c-71041477fb75-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.162686 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a70ee1-d1f0-4373-93b5-2132600f76b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.163988 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e4282-fadd-4ef2-a933-ca151ce9acde-serving-cert\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.164296 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-config\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.164358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-oauth-serving-cert\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.164533 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e88a945d-172b-40d3-938d-444a4d65bf11-audit-dir\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.164727 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de431f83-45ff-443c-b87c-d7ac12a3d71f-audit-policies\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.164884 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.165066 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/de431f83-45ff-443c-b87c-d7ac12a3d71f-encryption-config\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.165403 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.165527 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7da83a-3612-415d-9d5b-4684e1d38cde-config\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.165949 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-client-ca\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.166151 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.166646 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-config\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.166831 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-policies\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.166943 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ea7da83a-3612-415d-9d5b-4684e1d38cde-machine-approver-tls\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.167256 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-encryption-config\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.167391 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.167625 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.167798 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a70ee1-d1f0-4373-93b5-2132600f76b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.168019 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.167847 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.170127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-serving-cert\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.170385 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.171709 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e88a945d-172b-40d3-938d-444a4d65bf11-etcd-client\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.172339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/135262fe-e63f-4d62-8260-4a90ee8c1f26-serving-cert\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.173737 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.194173 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.212364 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229493 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76pns\" (UniqueName: \"kubernetes.io/projected/cf79b3b7-5bd1-4877-b5ab-1141d969437c-kube-api-access-76pns\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229526 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229550 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7kh5z\" (UniqueName: \"kubernetes.io/projected/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-kube-api-access-7kh5z\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229593 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpzs\" (UniqueName: \"kubernetes.io/projected/dca84748-635f-4929-9259-e64bc022a883-kube-api-access-rqpzs\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229609 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf79b3b7-5bd1-4877-b5ab-1141d969437c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229625 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229642 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e327a694-d78a-4a74-b353-dbcc4d4ce040-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229674 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-config\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229699 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca84748-635f-4929-9259-e64bc022a883-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:04:59 crc 
kubenswrapper[4955]: I0202 13:04:59.229713 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c7c59585-55a6-4686-a998-058c2228f134-tmpfs\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229730 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf79b3b7-5bd1-4877-b5ab-1141d969437c-proxy-tls\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229745 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g89wx\" (UniqueName: \"kubernetes.io/projected/c7c59585-55a6-4686-a998-058c2228f134-kube-api-access-g89wx\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229759 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnhwr\" (UniqueName: \"kubernetes.io/projected/e327a694-d78a-4a74-b353-dbcc4d4ce040-kube-api-access-jnhwr\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229776 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229794 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6dxw\" (UniqueName: \"kubernetes.io/projected/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-kube-api-access-j6dxw\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229815 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7c59585-55a6-4686-a998-058c2228f134-apiservice-cert\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229850 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dca84748-635f-4929-9259-e64bc022a883-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229871 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7c59585-55a6-4686-a998-058c2228f134-webhook-cert\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229897 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-serving-cert\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229928 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgxc\" (UniqueName: \"kubernetes.io/projected/e9e6e2cf-9009-4951-bd8d-7878af4bd041-kube-api-access-dhgxc\") pod \"downloads-7954f5f757-59w8t\" (UID: \"e9e6e2cf-9009-4951-bd8d-7878af4bd041\") " pod="openshift-console/downloads-7954f5f757-59w8t" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229945 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86z9g\" (UniqueName: \"kubernetes.io/projected/a5a8e507-be3d-4682-894b-249235ecc978-kube-api-access-86z9g\") pod \"cluster-samples-operator-665b6dd947-qfhzm\" (UID: \"a5a8e507-be3d-4682-894b-249235ecc978\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229961 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp5x\" (UniqueName: \"kubernetes.io/projected/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-kube-api-access-8mp5x\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.229978 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5a8e507-be3d-4682-894b-249235ecc978-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qfhzm\" (UID: \"a5a8e507-be3d-4682-894b-249235ecc978\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.230941 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c7c59585-55a6-4686-a998-058c2228f134-tmpfs\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.231303 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf79b3b7-5bd1-4877-b5ab-1141d969437c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.232025 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" 
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.232774 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dca84748-635f-4929-9259-e64bc022a883-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.241274 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca84748-635f-4929-9259-e64bc022a883-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.251874 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.271887 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.291605 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.312364 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.331815 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.352088 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.372195 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.391941 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.413012 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.432422 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.452352 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.472164 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.492022 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.513359 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.532316 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.543054 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7c59585-55a6-4686-a998-058c2228f134-apiservice-cert\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.543090 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7c59585-55a6-4686-a998-058c2228f134-webhook-cert\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.552504 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.572196 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.597569 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.602258 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.613235 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.631827 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.644852 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5a8e507-be3d-4682-894b-249235ecc978-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qfhzm\" (UID: \"a5a8e507-be3d-4682-894b-249235ecc978\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.652395 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.672829 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.692918 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.705209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.713397 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.732257 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.753284 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.772196 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.797059 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.812672 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.826154 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf79b3b7-5bd1-4877-b5ab-1141d969437c-proxy-tls\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.833076 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.852232 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.871866 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.891844 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.912587 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.932886 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.954921 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.971539 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.992643 4955 request.go:700] Waited for 1.002841036s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&limit=500&resourceVersion=0
Feb 02 13:04:59 crc kubenswrapper[4955]: I0202 13:04:59.994095 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.012144 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.032737 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.055069 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.072772 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.092260 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.112853 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.132755 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.153128 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.172270 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.193001 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.213034 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.231544 4955 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.231852 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e327a694-d78a-4a74-b353-dbcc4d4ce040-webhook-certs podName:e327a694-d78a-4a74-b353-dbcc4d4ce040 nodeName:}" failed. No retries permitted until 2026-02-02 13:05:00.731821498 +0000 UTC m=+151.644157988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e327a694-d78a-4a74-b353-dbcc4d4ce040-webhook-certs") pod "multus-admission-controller-857f4d67dd-46rd2" (UID: "e327a694-d78a-4a74-b353-dbcc4d4ce040") : failed to sync secret cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.231612 4955 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.231692 4955 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.231692 4955 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.232249 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-serving-cert podName:4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0 nodeName:}" failed. No retries permitted until 2026-02-02 13:05:00.732226287 +0000 UTC m=+151.644562777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-serving-cert") pod "service-ca-operator-777779d784-cm2xp" (UID: "4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0") : failed to sync secret cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.232893 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-config podName:4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0 nodeName:}" failed. No retries permitted until 2026-02-02 13:05:00.732844651 +0000 UTC m=+151.645181151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-config") pod "service-ca-operator-777779d784-cm2xp" (UID: "4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: E0202 13:05:00.232936 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-package-server-manager-serving-cert podName:afc3bdeb-edeb-4acf-8e93-c72d471e5f49 nodeName:}" failed. No retries permitted until 2026-02-02 13:05:00.732912423 +0000 UTC m=+151.645249003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-vbhpf" (UID: "afc3bdeb-edeb-4acf-8e93-c72d471e5f49") : failed to sync secret cache: timed out waiting for the condition
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.233469 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.253187 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.272105 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.293182 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.312599 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.333006 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.352881 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.372468 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.392885 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.412089 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.433167 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.452096 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.473623 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.492828 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.513258 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.533286 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.552292 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.572316 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.593095 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.622360 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.632489 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.652671 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.672361 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.692136 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.712991 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.732928 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.749869 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-serving-cert\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.750034 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.750076 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e327a694-d78a-4a74-b353-dbcc4d4ce040-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.750105 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-config\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.751021 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-config\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.752648 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.754886 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e327a694-d78a-4a74-b353-dbcc4d4ce040-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.759432 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-serving-cert\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.763145 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.774920 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.791986 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.812930 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.852491 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.873489 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.893500 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.913637 4955 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.932700 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.953638 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.988061 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6pt\" (UniqueName: \"kubernetes.io/projected/a1923bcc-1d3a-4205-807e-fdf37f3b08ea-kube-api-access-5r6pt\") pod \"machine-api-operator-5694c8668f-tkkbx\" (UID: \"a1923bcc-1d3a-4205-807e-fdf37f3b08ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx"
Feb 02 13:05:00 crc kubenswrapper[4955]: I0202 13:05:00.994775 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.012749 4955 request.go:700] Waited for 1.875540519s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.018875 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db4r7\" (UniqueName: \"kubernetes.io/projected/ea7da83a-3612-415d-9d5b-4684e1d38cde-kube-api-access-db4r7\") pod \"machine-approver-56656f9798-v9znw\" (UID: \"ea7da83a-3612-415d-9d5b-4684e1d38cde\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.031887 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n7k5\" (UniqueName: \"kubernetes.io/projected/aa2e4282-fadd-4ef2-a933-ca151ce9acde-kube-api-access-2n7k5\") pod \"console-operator-58897d9998-q7gnb\" (UID: \"aa2e4282-fadd-4ef2-a933-ca151ce9acde\") " pod="openshift-console-operator/console-operator-58897d9998-q7gnb"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.049995 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njn4\" (UniqueName: \"kubernetes.io/projected/586f9380-1574-4d6b-847d-d775fc1508b0-kube-api-access-7njn4\") pod \"controller-manager-879f6c89f-g2vv4\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.052140 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.067379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jxzv\" (UniqueName: \"kubernetes.io/projected/e88a945d-172b-40d3-938d-444a4d65bf11-kube-api-access-8jxzv\") pod \"apiserver-76f77b778f-g44jq\" (UID: \"e88a945d-172b-40d3-938d-444a4d65bf11\") " pod="openshift-apiserver/apiserver-76f77b778f-g44jq"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.086383 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc4pf\" (UniqueName: \"kubernetes.io/projected/e89c5a55-99d3-4005-b90c-71041477fb75-kube-api-access-zc4pf\") pod \"authentication-operator-69f744f599-mj5sp\" (UID: \"e89c5a55-99d3-4005-b90c-71041477fb75\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.110509 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxd6c\" (UniqueName: \"kubernetes.io/projected/38a70ee1-d1f0-4373-93b5-2132600f76b6-kube-api-access-sxd6c\") pod \"openshift-apiserver-operator-796bbdcf4f-mlcfp\" (UID: \"38a70ee1-d1f0-4373-93b5-2132600f76b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.126767 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.127080 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vzd\" (UniqueName: \"kubernetes.io/projected/135262fe-e63f-4d62-8260-4a90ee8c1f26-kube-api-access-g6vzd\") pod \"route-controller-manager-6576b87f9c-9fbgh\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.142176 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.154203 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6dh\" (UniqueName: \"kubernetes.io/projected/ddec21a9-43c9-4885-abde-9e65c9a8762d-kube-api-access-rd6dh\") pod \"oauth-openshift-558db77b4-dmwz5\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.171043 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bccvr\" (UniqueName: \"kubernetes.io/projected/de431f83-45ff-443c-b87c-d7ac12a3d71f-kube-api-access-bccvr\") pod \"apiserver-7bbb656c7d-ghqn5\" (UID: \"de431f83-45ff-443c-b87c-d7ac12a3d71f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.179206 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-q7gnb"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.181293 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tkkbx"]
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.186780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkvnb\" (UniqueName: \"kubernetes.io/projected/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-kube-api-access-wkvnb\") pod \"console-f9d7485db-6tx72\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") " pod="openshift-console/console-f9d7485db-6tx72"
Feb 02 13:05:01 crc kubenswrapper[4955]: W0202 13:05:01.194473 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1923bcc_1d3a_4205_807e_fdf37f3b08ea.slice/crio-6941c973afca163c4cea5e01c8b31af67fee0378c72166bc945f4a1536378dce WatchSource:0}: Error finding container 6941c973afca163c4cea5e01c8b31af67fee0378c72166bc945f4a1536378dce: Status 404 returned error can't find the container with id 6941c973afca163c4cea5e01c8b31af67fee0378c72166bc945f4a1536378dce
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.228043 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g89wx\" (UniqueName: \"kubernetes.io/projected/c7c59585-55a6-4686-a998-058c2228f134-kube-api-access-g89wx\") pod \"packageserver-d55dfcdfc-2lb4b\" (UID: \"c7c59585-55a6-4686-a998-058c2228f134\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.247844 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnhwr\" (UniqueName: \"kubernetes.io/projected/e327a694-d78a-4a74-b353-dbcc4d4ce040-kube-api-access-jnhwr\") pod \"multus-admission-controller-857f4d67dd-46rd2\" (UID: \"e327a694-d78a-4a74-b353-dbcc4d4ce040\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.268870 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6dxw\" (UniqueName: \"kubernetes.io/projected/4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0-kube-api-access-j6dxw\") pod \"service-ca-operator-777779d784-cm2xp\" (UID: \"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.288315 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pns\" (UniqueName: \"kubernetes.io/projected/cf79b3b7-5bd1-4877-b5ab-1141d969437c-kube-api-access-76pns\") pod \"machine-config-controller-84d6567774-zbp7g\" (UID: \"cf79b3b7-5bd1-4877-b5ab-1141d969437c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.305689 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kh5z\" (UniqueName: \"kubernetes.io/projected/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-kube-api-access-7kh5z\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.313956 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.317813 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g44jq"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.323080 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g2vv4"]
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.328885 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpzs\" (UniqueName: \"kubernetes.io/projected/dca84748-635f-4929-9259-e64bc022a883-kube-api-access-rqpzs\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg9pb\" (UID: \"dca84748-635f-4929-9259-e64bc022a883\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"
Feb 02 13:05:01 crc kubenswrapper[4955]: W0202 13:05:01.333043 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod586f9380_1574_4d6b_847d_d775fc1508b0.slice/crio-fa8b5b2e7b7a25b83785a0de1fd0d721e867459a1baaddc180f7ec9f3051dc12 WatchSource:0}: Error finding container fa8b5b2e7b7a25b83785a0de1fd0d721e867459a1baaddc180f7ec9f3051dc12: Status 404 returned error can't find the container with id fa8b5b2e7b7a25b83785a0de1fd0d721e867459a1baaddc180f7ec9f3051dc12
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.341192 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.350682 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgxc\" (UniqueName: \"kubernetes.io/projected/e9e6e2cf-9009-4951-bd8d-7878af4bd041-kube-api-access-dhgxc\") pod \"downloads-7954f5f757-59w8t\" (UID: \"e9e6e2cf-9009-4951-bd8d-7878af4bd041\") " pod="openshift-console/downloads-7954f5f757-59w8t"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.377672 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp"]
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.378546 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86z9g\" (UniqueName: \"kubernetes.io/projected/a5a8e507-be3d-4682-894b-249235ecc978-kube-api-access-86z9g\") pod \"cluster-samples-operator-665b6dd947-qfhzm\" (UID: \"a5a8e507-be3d-4682-894b-249235ecc978\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.378984 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.395209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ea12c47-cdd1-4fa0-aea1-8142e2754bb9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xqqwk\" (UID: \"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.395257 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.399422 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.410589 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q7gnb"]
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.415450 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp5x\" (UniqueName: \"kubernetes.io/projected/afc3bdeb-edeb-4acf-8e93-c72d471e5f49-kube-api-access-8mp5x\") pod \"package-server-manager-789f6589d5-vbhpf\" (UID: \"afc3bdeb-edeb-4acf-8e93-c72d471e5f49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"
Feb 02 13:05:01 crc kubenswrapper[4955]: W0202 13:05:01.419002 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a70ee1_d1f0_4373_93b5_2132600f76b6.slice/crio-219f9256d7f733908249485b0980d77af815a11d946a2721ab3bf5af925b2100 WatchSource:0}: Error finding container 219f9256d7f733908249485b0980d77af815a11d946a2721ab3bf5af925b2100: Status 404 returned error can't find the container with id 219f9256d7f733908249485b0980d77af815a11d946a2721ab3bf5af925b2100
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.449423 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.449529 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.455543 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.464985 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlx6s\" (UniqueName: \"kubernetes.io/projected/b9332760-2eb1-47c0-b93a-97168fc74379-kube-api-access-dlx6s\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465014 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmhv\" (UniqueName: \"kubernetes.io/projected/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-kube-api-access-krmhv\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465053 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8a8d330-b21f-4f25-b971-b880b6adee0c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465071 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465088 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvft\" (UniqueName: \"kubernetes.io/projected/07fab9a9-1e1c-442c-88e5-b5add57beff5-kube-api-access-lgvft\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465125 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d19da25f-25c6-4654-86a1-f681e982e738-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465143 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9798e02a-e958-4d10-8b6b-a9eb99ae2600-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465167 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwfm\" (UniqueName: \"kubernetes.io/projected/e17bf741-cd77-4d87-aea5-663e5d2ba319-kube-api-access-pcwfm\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465181 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-secret-volume\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465198 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07fab9a9-1e1c-442c-88e5-b5add57beff5-metrics-tls\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465230 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8d741e9f-0098-4689-8545-3bafe559a9c3-signing-key\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465255 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-default-certificate\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465272 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465307 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f962e859-9339-4e87-ab2f-b4b107cf6529-certs\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465367 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ca77039-1020-4afc-bc21-3b2aeded6728-profile-collector-cert\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465416 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465435 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-client\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465450 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-ca\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465467 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-stats-auth\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465484 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd56g\" (UniqueName: \"kubernetes.io/projected/2367ef70-be73-4dbb-ac4e-f1ce9711351d-kube-api-access-sd56g\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465503 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-serving-cert\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465530 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8d741e9f-0098-4689-8545-3bafe559a9c3-signing-cabundle\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465595 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23cd2aa2-9b0c-4cab-9252-c8cd4e749a44-cert\") pod \"ingress-canary-q54rv\" (UID: \"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44\") " pod="openshift-ingress-canary/ingress-canary-q54rv"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz954\" (UniqueName: \"kubernetes.io/projected/8ca77039-1020-4afc-bc21-3b2aeded6728-kube-api-access-bz954\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465627 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-service-ca\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465651 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-metrics-certs\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465684 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95f0540a-1739-40df-9242-c9c2e6ccac7f-service-ca-bundle\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465707 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8a8d330-b21f-4f25-b971-b880b6adee0c-proxy-tls\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465739 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-trusted-ca\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465755 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465769 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-config\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465814 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d48541-fa1c-496e-9d39-c5e75faa8d55-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465831 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b9332760-2eb1-47c0-b93a-97168fc74379-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465864 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96q9r\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-kube-api-access-96q9r\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465886 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465917 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2367ef70-be73-4dbb-ac4e-f1ce9711351d-serving-cert\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465950 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-registry-certificates\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465968 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b9332760-2eb1-47c0-b93a-97168fc74379-srv-cert\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.465991 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccps\" (UniqueName: \"kubernetes.io/projected/e8a8d330-b21f-4f25-b971-b880b6adee0c-kube-api-access-bccps\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466010 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c16acdaf-3bd0-4c0d-94a0-49da6c643bf8-metrics-tls\") pod \"dns-operator-744455d44c-9vptd\" (UID: \"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8\") " pod="openshift-dns-operator/dns-operator-744455d44c-9vptd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466029 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rx2\" (UniqueName: \"kubernetes.io/projected/23cd2aa2-9b0c-4cab-9252-c8cd4e749a44-kube-api-access-88rx2\") pod \"ingress-canary-q54rv\" (UID: \"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44\") " pod="openshift-ingress-canary/ingress-canary-q54rv"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ca77039-1020-4afc-bc21-3b2aeded6728-srv-cert\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466075 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466096 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-registry-tls\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466113 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d48541-fa1c-496e-9d39-c5e75faa8d55-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466129 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07fab9a9-1e1c-442c-88e5-b5add57beff5-trusted-ca\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466146 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07fab9a9-1e1c-442c-88e5-b5add57beff5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466206 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62lt\" (UniqueName: \"kubernetes.io/projected/c16acdaf-3bd0-4c0d-94a0-49da6c643bf8-kube-api-access-l62lt\") pod \"dns-operator-744455d44c-9vptd\" (UID: \"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8\") " pod="openshift-dns-operator/dns-operator-744455d44c-9vptd"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466259 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-available-featuregates\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466283 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/949e1113-3dd6-425e-9425-4208ecd1a30f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466325 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwsnn\" (UniqueName: \"kubernetes.io/projected/9798e02a-e958-4d10-8b6b-a9eb99ae2600-kube-api-access-rwsnn\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466348 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a8d330-b21f-4f25-b971-b880b6adee0c-images\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466384 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpnww\" (UniqueName: \"kubernetes.io/projected/f962e859-9339-4e87-ab2f-b4b107cf6529-kube-api-access-dpnww\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466409 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfwf\" (UniqueName: \"kubernetes.io/projected/8458e825-0591-415f-960d-06357c721b4c-kube-api-access-6tfwf\") pod \"control-plane-machine-set-operator-78cbb6b69f-xh9w4\" (UID: \"8458e825-0591-415f-960d-06357c721b4c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466430 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9798e02a-e958-4d10-8b6b-a9eb99ae2600-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466463 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mg8\" (UniqueName: \"kubernetes.io/projected/32e7b84e-a5b9-476f-a3f0-a4227db1d8e8-kube-api-access-x7mg8\") pod \"migrator-59844c95c7-j2pcs\" (UID:
\"32e7b84e-a5b9-476f-a3f0-a4227db1d8e8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466483 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949e1113-3dd6-425e-9425-4208ecd1a30f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.466529 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shmp\" (UniqueName: \"kubernetes.io/projected/95f0540a-1739-40df-9242-c9c2e6ccac7f-kube-api-access-6shmp\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467724 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d19da25f-25c6-4654-86a1-f681e982e738-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467753 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d48541-fa1c-496e-9d39-c5e75faa8d55-config\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtmb\" (UniqueName: \"kubernetes.io/projected/8d741e9f-0098-4689-8545-3bafe559a9c3-kube-api-access-dwtmb\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467790 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-config-volume\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467806 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q87q\" (UniqueName: \"kubernetes.io/projected/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-kube-api-access-5q87q\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467866 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-bound-sa-token\") pod \"image-registry-697d97f7c8-rrdgr\" 
(UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467887 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/949e1113-3dd6-425e-9425-4208ecd1a30f-config\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467904 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8458e825-0591-415f-960d-06357c721b4c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xh9w4\" (UID: \"8458e825-0591-415f-960d-06357c721b4c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.467935 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f962e859-9339-4e87-ab2f-b4b107cf6529-node-bootstrap-token\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.473215 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:05:01 crc kubenswrapper[4955]: E0202 13:05:01.475523 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:01.975507973 +0000 UTC m=+152.887844423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.498714 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-59w8t" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569097 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569580 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfwf\" (UniqueName: \"kubernetes.io/projected/8458e825-0591-415f-960d-06357c721b4c-kube-api-access-6tfwf\") pod \"control-plane-machine-set-operator-78cbb6b69f-xh9w4\" (UID: \"8458e825-0591-415f-960d-06357c721b4c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569614 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9798e02a-e958-4d10-8b6b-a9eb99ae2600-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569635 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949e1113-3dd6-425e-9425-4208ecd1a30f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569650 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7mg8\" (UniqueName: \"kubernetes.io/projected/32e7b84e-a5b9-476f-a3f0-a4227db1d8e8-kube-api-access-x7mg8\") pod \"migrator-59844c95c7-j2pcs\" (UID: \"32e7b84e-a5b9-476f-a3f0-a4227db1d8e8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569664 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shmp\" (UniqueName: \"kubernetes.io/projected/95f0540a-1739-40df-9242-c9c2e6ccac7f-kube-api-access-6shmp\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569681 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d48541-fa1c-496e-9d39-c5e75faa8d55-config\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569729 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtmb\" (UniqueName: \"kubernetes.io/projected/8d741e9f-0098-4689-8545-3bafe559a9c3-kube-api-access-dwtmb\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc 
kubenswrapper[4955]: I0202 13:05:01.569747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-config-volume\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569770 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d19da25f-25c6-4654-86a1-f681e982e738-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569784 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q87q\" (UniqueName: \"kubernetes.io/projected/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-kube-api-access-5q87q\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569811 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-bound-sa-token\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569825 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/949e1113-3dd6-425e-9425-4208ecd1a30f-config\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569849 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8458e825-0591-415f-960d-06357c721b4c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xh9w4\" (UID: \"8458e825-0591-415f-960d-06357c721b4c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f962e859-9339-4e87-ab2f-b4b107cf6529-node-bootstrap-token\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569885 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlx6s\" (UniqueName: \"kubernetes.io/projected/b9332760-2eb1-47c0-b93a-97168fc74379-kube-api-access-dlx6s\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569909 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-krmhv\" (UniqueName: \"kubernetes.io/projected/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-kube-api-access-krmhv\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569925 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8a8d330-b21f-4f25-b971-b880b6adee0c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569940 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569954 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgvft\" (UniqueName: \"kubernetes.io/projected/07fab9a9-1e1c-442c-88e5-b5add57beff5-kube-api-access-lgvft\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.569989 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d19da25f-25c6-4654-86a1-f681e982e738-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570004 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9798e02a-e958-4d10-8b6b-a9eb99ae2600-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570024 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-csi-data-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcwfm\" (UniqueName: \"kubernetes.io/projected/e17bf741-cd77-4d87-aea5-663e5d2ba319-kube-api-access-pcwfm\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570056 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-secret-volume\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570081 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07fab9a9-1e1c-442c-88e5-b5add57beff5-metrics-tls\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570114 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8d741e9f-0098-4689-8545-3bafe559a9c3-signing-key\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570129 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-default-certificate\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570164 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570181 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-registration-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570195 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f962e859-9339-4e87-ab2f-b4b107cf6529-certs\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570220 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldcl\" (UniqueName: \"kubernetes.io/projected/987c78e8-da7f-41e7-be71-1eccab1829b6-kube-api-access-7ldcl\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570239 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ca77039-1020-4afc-bc21-3b2aeded6728-profile-collector-cert\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" 
Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570255 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/987c78e8-da7f-41e7-be71-1eccab1829b6-config-volume\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570288 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-client\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570303 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-ca\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570319 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-stats-auth\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570333 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd56g\" (UniqueName: \"kubernetes.io/projected/2367ef70-be73-4dbb-ac4e-f1ce9711351d-kube-api-access-sd56g\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570364 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-serving-cert\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570378 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-socket-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570394 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8d741e9f-0098-4689-8545-3bafe559a9c3-signing-cabundle\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570411 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23cd2aa2-9b0c-4cab-9252-c8cd4e749a44-cert\") pod \"ingress-canary-q54rv\" (UID: \"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44\") " 
pod="openshift-ingress-canary/ingress-canary-q54rv" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570412 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d48541-fa1c-496e-9d39-c5e75faa8d55-config\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570426 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz954\" (UniqueName: \"kubernetes.io/projected/8ca77039-1020-4afc-bc21-3b2aeded6728-kube-api-access-bz954\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570465 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-service-ca\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570482 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-metrics-certs\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570500 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klf82\" (UniqueName: \"kubernetes.io/projected/e8b7ba94-1502-4aa8-aa07-daab4d369add-kube-api-access-klf82\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570518 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95f0540a-1739-40df-9242-c9c2e6ccac7f-service-ca-bundle\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570533 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8a8d330-b21f-4f25-b971-b880b6adee0c-proxy-tls\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570548 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-config\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570591 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-mountpoint-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570607 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-trusted-ca\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570622 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570636 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b9332760-2eb1-47c0-b93a-97168fc74379-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570651 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d48541-fa1c-496e-9d39-c5e75faa8d55-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570684 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96q9r\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-kube-api-access-96q9r\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570709 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570741 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2367ef70-be73-4dbb-ac4e-f1ce9711351d-serving-cert\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570758 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/987c78e8-da7f-41e7-be71-1eccab1829b6-metrics-tls\") pod \"dns-default-ddl6c\" (UID: 
\"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570773 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-registry-certificates\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b9332760-2eb1-47c0-b93a-97168fc74379-srv-cert\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570805 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bccps\" (UniqueName: \"kubernetes.io/projected/e8a8d330-b21f-4f25-b971-b880b6adee0c-kube-api-access-bccps\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570829 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c16acdaf-3bd0-4c0d-94a0-49da6c643bf8-metrics-tls\") pod \"dns-operator-744455d44c-9vptd\" (UID: \"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8\") " pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ca77039-1020-4afc-bc21-3b2aeded6728-srv-cert\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570875 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rx2\" (UniqueName: \"kubernetes.io/projected/23cd2aa2-9b0c-4cab-9252-c8cd4e749a44-kube-api-access-88rx2\") pod \"ingress-canary-q54rv\" (UID: \"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44\") " pod="openshift-ingress-canary/ingress-canary-q54rv" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570892 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570910 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-registry-tls\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570931 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d48541-fa1c-496e-9d39-c5e75faa8d55-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570946 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07fab9a9-1e1c-442c-88e5-b5add57beff5-trusted-ca\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570960 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07fab9a9-1e1c-442c-88e5-b5add57beff5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.570987 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l62lt\" (UniqueName: \"kubernetes.io/projected/c16acdaf-3bd0-4c0d-94a0-49da6c643bf8-kube-api-access-l62lt\") pod \"dns-operator-744455d44c-9vptd\" (UID: \"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8\") " pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.571001 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-plugins-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.571019 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-available-featuregates\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.571043 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/949e1113-3dd6-425e-9425-4208ecd1a30f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.571060 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwsnn\" (UniqueName: \"kubernetes.io/projected/9798e02a-e958-4d10-8b6b-a9eb99ae2600-kube-api-access-rwsnn\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.571077 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/e8a8d330-b21f-4f25-b971-b880b6adee0c-images\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.571093 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpnww\" (UniqueName: \"kubernetes.io/projected/f962e859-9339-4e87-ab2f-b4b107cf6529-kube-api-access-dpnww\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: E0202 13:05:01.571248 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.071237008 +0000 UTC m=+152.983573458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.572780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-config-volume\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.573049 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d19da25f-25c6-4654-86a1-f681e982e738-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.573532 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8a8d330-b21f-4f25-b971-b880b6adee0c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.574130 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/949e1113-3dd6-425e-9425-4208ecd1a30f-config\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.575741 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d19da25f-25c6-4654-86a1-f681e982e738-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.576587 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9798e02a-e958-4d10-8b6b-a9eb99ae2600-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.579007 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b9332760-2eb1-47c0-b93a-97168fc74379-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.580503 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b9332760-2eb1-47c0-b93a-97168fc74379-srv-cert\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.580846 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8458e825-0591-415f-960d-06357c721b4c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xh9w4\" (UID: \"8458e825-0591-415f-960d-06357c721b4c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.581075 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2367ef70-be73-4dbb-ac4e-f1ce9711351d-serving-cert\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.581338 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a8d330-b21f-4f25-b971-b880b6adee0c-images\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.581934 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.583089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.583358 
4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c16acdaf-3bd0-4c0d-94a0-49da6c643bf8-metrics-tls\") pod \"dns-operator-744455d44c-9vptd\" (UID: \"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8\") " pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.583383 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07fab9a9-1e1c-442c-88e5-b5add57beff5-trusted-ca\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.584225 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23cd2aa2-9b0c-4cab-9252-c8cd4e749a44-cert\") pod \"ingress-canary-q54rv\" (UID: \"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44\") " pod="openshift-ingress-canary/ingress-canary-q54rv" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.584258 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95f0540a-1739-40df-9242-c9c2e6ccac7f-service-ca-bundle\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.584737 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8a8d330-b21f-4f25-b971-b880b6adee0c-proxy-tls\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.584861 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-available-featuregates\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.585178 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-service-ca\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.585374 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.585770 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9798e02a-e958-4d10-8b6b-a9eb99ae2600-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.586038 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07fab9a9-1e1c-442c-88e5-b5add57beff5-metrics-tls\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.587537 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-trusted-ca\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.589491 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f962e859-9339-4e87-ab2f-b4b107cf6529-certs\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.589792 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.589939 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-default-certificate\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.590907 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-serving-cert\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.591651 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ca77039-1020-4afc-bc21-3b2aeded6728-profile-collector-cert\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.591693 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8d741e9f-0098-4689-8545-3bafe559a9c3-signing-cabundle\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.591912 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-registry-tls\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.592222 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-ca\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.593254 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/949e1113-3dd6-425e-9425-4208ecd1a30f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.593421 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-secret-volume\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.594911 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-registry-certificates\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.595003 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2367ef70-be73-4dbb-ac4e-f1ce9711351d-config\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.595071 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f962e859-9339-4e87-ab2f-b4b107cf6529-node-bootstrap-token\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.599239 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-stats-auth\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.601781 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.602041 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95f0540a-1739-40df-9242-c9c2e6ccac7f-metrics-certs\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.602356 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2367ef70-be73-4dbb-ac4e-f1ce9711351d-etcd-client\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.602431 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8d741e9f-0098-4689-8545-3bafe559a9c3-signing-key\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.602775 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d48541-fa1c-496e-9d39-c5e75faa8d55-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.603253 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ca77039-1020-4afc-bc21-3b2aeded6728-srv-cert\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.622939 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.624050 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpnww\" (UniqueName: \"kubernetes.io/projected/f962e859-9339-4e87-ab2f-b4b107cf6529-kube-api-access-dpnww\") pod \"machine-config-server-kpfmp\" (UID: \"f962e859-9339-4e87-ab2f-b4b107cf6529\") " pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.629441 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.634718 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tfwf\" (UniqueName: \"kubernetes.io/projected/8458e825-0591-415f-960d-06357c721b4c-kube-api-access-6tfwf\") pod \"control-plane-machine-set-operator-78cbb6b69f-xh9w4\" (UID: \"8458e825-0591-415f-960d-06357c721b4c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.649019 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtmb\" (UniqueName: \"kubernetes.io/projected/8d741e9f-0098-4689-8545-3bafe559a9c3-kube-api-access-dwtmb\") pod \"service-ca-9c57cc56f-mrjsw\" (UID: \"8d741e9f-0098-4689-8545-3bafe559a9c3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.655042 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.667495 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmhv\" (UniqueName: \"kubernetes.io/projected/1597ed2b-7fa1-4e63-8826-6e5e3ee7d116-kube-api-access-krmhv\") pod \"openshift-config-operator-7777fb866f-psbd9\" (UID: \"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673473 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-plugins-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673652 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-csi-data-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673739 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-registration-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673762 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldcl\" (UniqueName: \"kubernetes.io/projected/987c78e8-da7f-41e7-be71-1eccab1829b6-kube-api-access-7ldcl\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673799 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/987c78e8-da7f-41e7-be71-1eccab1829b6-config-volume\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 
13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673822 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673864 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-socket-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673889 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klf82\" (UniqueName: \"kubernetes.io/projected/e8b7ba94-1502-4aa8-aa07-daab4d369add-kube-api-access-klf82\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673907 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-mountpoint-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.673937 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/987c78e8-da7f-41e7-be71-1eccab1829b6-metrics-tls\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: E0202 13:05:01.675264 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.175245749 +0000 UTC m=+153.087582199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.675781 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-plugins-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.675799 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-mountpoint-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.675845 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-registration-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.675904 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-csi-data-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.676015 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7ba94-1502-4aa8-aa07-daab4d369add-socket-dir\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.676057 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.676658 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/987c78e8-da7f-41e7-be71-1eccab1829b6-config-volume\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.679545 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/987c78e8-da7f-41e7-be71-1eccab1829b6-metrics-tls\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.694580 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz954\" (UniqueName: \"kubernetes.io/projected/8ca77039-1020-4afc-bc21-3b2aeded6728-kube-api-access-bz954\") pod \"catalog-operator-68c6474976-9slxp\" (UID: \"8ca77039-1020-4afc-bc21-3b2aeded6728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.706857 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q87q\" (UniqueName: \"kubernetes.io/projected/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-kube-api-access-5q87q\") pod \"collect-profiles-29500620-mzf7f\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.707228 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.720530 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.728323 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.732219 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-bound-sa-token\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.741305 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kpfmp" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.750153 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/949e1113-3dd6-425e-9425-4208ecd1a30f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lpmfq\" (UID: \"949e1113-3dd6-425e-9425-4208ecd1a30f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.770796 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7mg8\" (UniqueName: \"kubernetes.io/projected/32e7b84e-a5b9-476f-a3f0-a4227db1d8e8-kube-api-access-x7mg8\") pod \"migrator-59844c95c7-j2pcs\" (UID: \"32e7b84e-a5b9-476f-a3f0-a4227db1d8e8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.775118 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:01 crc kubenswrapper[4955]: E0202 13:05:01.775520 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.275504273 +0000 UTC m=+153.187840723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.788969 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.803947 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shmp\" (UniqueName: \"kubernetes.io/projected/95f0540a-1739-40df-9242-c9c2e6ccac7f-kube-api-access-6shmp\") pod \"router-default-5444994796-bjmqj\" (UID: \"95f0540a-1739-40df-9242-c9c2e6ccac7f\") " pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.812103 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d48541-fa1c-496e-9d39-c5e75faa8d55-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vfbjc\" (UID: \"a0d48541-fa1c-496e-9d39-c5e75faa8d55\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.814678 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mj5sp"] Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.839453 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"] Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.840851 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g44jq"] Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.852677 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96q9r\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-kube-api-access-96q9r\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.871538 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgvft\" (UniqueName: \"kubernetes.io/projected/07fab9a9-1e1c-442c-88e5-b5add57beff5-kube-api-access-lgvft\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.876518 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:01 crc kubenswrapper[4955]: E0202 13:05:01.877203 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.37718793 +0000 UTC m=+153.289524380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.893022 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd56g\" (UniqueName: \"kubernetes.io/projected/2367ef70-be73-4dbb-ac4e-f1ce9711351d-kube-api-access-sd56g\") pod \"etcd-operator-b45778765-mbssd\" (UID: \"2367ef70-be73-4dbb-ac4e-f1ce9711351d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.906940 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.910699 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07fab9a9-1e1c-442c-88e5-b5add57beff5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m8vvr\" (UID: \"07fab9a9-1e1c-442c-88e5-b5add57beff5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.925239 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g"] Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.926691 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"] Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.928805 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l62lt\" (UniqueName: \"kubernetes.io/projected/c16acdaf-3bd0-4c0d-94a0-49da6c643bf8-kube-api-access-l62lt\") pod \"dns-operator-744455d44c-9vptd\" (UID: \"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8\") " pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.929284 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" event={"ID":"38a70ee1-d1f0-4373-93b5-2132600f76b6","Type":"ContainerStarted","Data":"f22180460e098e0f1499e7828f88c439db5b20a8ce29c0c9cc9d97229dc3ffd1"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.929337 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" event={"ID":"38a70ee1-d1f0-4373-93b5-2132600f76b6","Type":"ContainerStarted","Data":"219f9256d7f733908249485b0980d77af815a11d946a2721ab3bf5af925b2100"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.932814 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" event={"ID":"586f9380-1574-4d6b-847d-d775fc1508b0","Type":"ContainerStarted","Data":"7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.932870 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" 
event={"ID":"586f9380-1574-4d6b-847d-d775fc1508b0","Type":"ContainerStarted","Data":"fa8b5b2e7b7a25b83785a0de1fd0d721e867459a1baaddc180f7ec9f3051dc12"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.932981 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.934600 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.935541 4955 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g2vv4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.935609 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" podUID="586f9380-1574-4d6b-847d-d775fc1508b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.947215 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" event={"ID":"ea7da83a-3612-415d-9d5b-4684e1d38cde","Type":"ContainerStarted","Data":"e6a6c1d70a9745447024300465b1c774e6be72f75e3095a585fe24291bee8cd5"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.947283 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" event={"ID":"ea7da83a-3612-415d-9d5b-4684e1d38cde","Type":"ContainerStarted","Data":"68e5f099e55239e4ee75c772ff0093cd0d73fbd3e4d379cc6949bbb86f86eec6"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.947296 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" event={"ID":"ea7da83a-3612-415d-9d5b-4684e1d38cde","Type":"ContainerStarted","Data":"f161136c1a66e6e5da702b30aa87e59d08b1b4096b04594f77799d6106e74763"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.949310 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.952954 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" event={"ID":"a1923bcc-1d3a-4205-807e-fdf37f3b08ea","Type":"ContainerStarted","Data":"778e73eb75ffcbf687989d9df100649f695af1b50b0c8af00ebc796a95908008"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.952995 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" event={"ID":"a1923bcc-1d3a-4205-807e-fdf37f3b08ea","Type":"ContainerStarted","Data":"f1718e679e8c2fc97d56220ee47be9b4fc5980fd200645558bd827834c597ab3"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.953006 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" event={"ID":"a1923bcc-1d3a-4205-807e-fdf37f3b08ea","Type":"ContainerStarted","Data":"6941c973afca163c4cea5e01c8b31af67fee0378c72166bc945f4a1536378dce"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.959364 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kpfmp" event={"ID":"f962e859-9339-4e87-ab2f-b4b107cf6529","Type":"ContainerStarted","Data":"0e659afee33ebebc7252685c5b000aa450a2b8105c9011940184c209ea7bf79f"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.962300 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.962320 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bccps\" (UniqueName: \"kubernetes.io/projected/e8a8d330-b21f-4f25-b971-b880b6adee0c-kube-api-access-bccps\") pod \"machine-config-operator-74547568cd-2lgxn\" (UID: \"e8a8d330-b21f-4f25-b971-b880b6adee0c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.968474 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.969422 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcwfm\" (UniqueName: \"kubernetes.io/projected/e17bf741-cd77-4d87-aea5-663e5d2ba319-kube-api-access-pcwfm\") pod \"marketplace-operator-79b997595-74zm4\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:01 crc kubenswrapper[4955]: W0202 13:05:01.970364 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode88a945d_172b_40d3_938d_444a4d65bf11.slice/crio-aa6df968a8104d3871d6aee5107c9c30b2498d6ddf9174ad48084a3a4ce6bad1 WatchSource:0}: Error finding container aa6df968a8104d3871d6aee5107c9c30b2498d6ddf9174ad48084a3a4ce6bad1: Status 404 returned error can't find the container with id aa6df968a8104d3871d6aee5107c9c30b2498d6ddf9174ad48084a3a4ce6bad1 Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.973469 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" event={"ID":"e89c5a55-99d3-4005-b90c-71041477fb75","Type":"ContainerStarted","Data":"970dab60b68bc9af4274b3488d13ee54d1f6511e87bb11e73a571b892a264f1f"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.979256 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:01 crc kubenswrapper[4955]: E0202 13:05:01.980222 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.480204037 +0000 UTC m=+153.392540487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:01 crc kubenswrapper[4955]: W0202 13:05:01.985516 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod135262fe_e63f_4d62_8260_4a90ee8c1f26.slice/crio-886ce91c06386ec84d0bebe543a31ee1d0b8d0143b88c46067f1f8884fb1c9f4 WatchSource:0}: Error finding container 886ce91c06386ec84d0bebe543a31ee1d0b8d0143b88c46067f1f8884fb1c9f4: Status 404 returned error can't find the container with id 886ce91c06386ec84d0bebe543a31ee1d0b8d0143b88c46067f1f8884fb1c9f4 Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.985650 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-q7gnb" event={"ID":"aa2e4282-fadd-4ef2-a933-ca151ce9acde","Type":"ContainerStarted","Data":"1d945eae600deff823e7f4f733e5d9bf355d3cd47fde6449c6670663e87a6e22"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.985677 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-q7gnb" event={"ID":"aa2e4282-fadd-4ef2-a933-ca151ce9acde","Type":"ContainerStarted","Data":"057def1db76f2638fe9fb162457db3d80495d809ab28253eb8eec02939bf912a"} Feb 02 13:05:01 crc kubenswrapper[4955]: I0202 13:05:01.986253 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:01.996550 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwsnn\" (UniqueName: \"kubernetes.io/projected/9798e02a-e958-4d10-8b6b-a9eb99ae2600-kube-api-access-rwsnn\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5gxr\" (UID: \"9798e02a-e958-4d10-8b6b-a9eb99ae2600\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.012795 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf0c869-fa36-4262-ae8d-aae0bf0f5f00-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ptgb8\" (UID: \"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.034458 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.035676 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.041447 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlx6s\" (UniqueName: \"kubernetes.io/projected/b9332760-2eb1-47c0-b93a-97168fc74379-kube-api-access-dlx6s\") pod \"olm-operator-6b444d44fb-ncj99\" (UID: \"b9332760-2eb1-47c0-b93a-97168fc74379\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.056274 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rx2\" (UniqueName: \"kubernetes.io/projected/23cd2aa2-9b0c-4cab-9252-c8cd4e749a44-kube-api-access-88rx2\") pod \"ingress-canary-q54rv\" (UID: \"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44\") " pod="openshift-ingress-canary/ingress-canary-q54rv" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.056499 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q54rv" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.067108 4955 patch_prober.go:28] interesting pod/console-operator-58897d9998-q7gnb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.067161 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-q7gnb" podUID="aa2e4282-fadd-4ef2-a933-ca151ce9acde" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.071187 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klf82\" (UniqueName: \"kubernetes.io/projected/e8b7ba94-1502-4aa8-aa07-daab4d369add-kube-api-access-klf82\") pod \"csi-hostpathplugin-9mrfh\" (UID: \"e8b7ba94-1502-4aa8-aa07-daab4d369add\") " pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.087458 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.088369 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.588354097 +0000 UTC m=+153.500690547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.089357 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.093939 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldcl\" (UniqueName: \"kubernetes.io/projected/987c78e8-da7f-41e7-be71-1eccab1829b6-kube-api-access-7ldcl\") pod \"dns-default-ddl6c\" (UID: \"987c78e8-da7f-41e7-be71-1eccab1829b6\") " pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.105337 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.117990 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.122015 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-46rd2"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.149042 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.159532 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.187794 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.191157 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.191737 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.691717793 +0000 UTC m=+153.604054243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.197946 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.291505 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.292239 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.297837 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.797812564 +0000 UTC m=+153.710149014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.300329 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.323669 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.328875 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.332706 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.337282 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmwz5"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.356417 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.363034 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6tx72"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.365148 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-59w8t"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.365499 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.392935 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.393405 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.893365145 +0000 UTC m=+153.805701595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.393494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.394027 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.894017591 +0000 UTC m=+153.806354041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: W0202 13:05:02.418714 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46cf5e31_21a3_4f2d_b5cc_4a35eb58ccdb.slice/crio-4565151c3e7d2fc0a32b581c3807561386b72a7622e0cc8aa296e39f7151f923 WatchSource:0}: Error finding container 4565151c3e7d2fc0a32b581c3807561386b72a7622e0cc8aa296e39f7151f923: Status 404 returned error can't find the container with id 4565151c3e7d2fc0a32b581c3807561386b72a7622e0cc8aa296e39f7151f923 Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.483467 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mrjsw"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.495600 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.495966 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:02.995950914 +0000 UTC m=+153.908287364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.509276 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-q7gnb" podStartSLOduration=127.509258479 podStartE2EDuration="2m7.509258479s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:02.508952412 +0000 UTC m=+153.421288862" watchObservedRunningTime="2026-02-02 13:05:02.509258479 +0000 UTC m=+153.421594929" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.516546 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.517918 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.597159 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.597489 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.097477867 +0000 UTC m=+154.009814317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.599543 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.605810 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.616994 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.661294 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.704070 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.704282 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.204262814 +0000 UTC m=+154.116599264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.704731 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.705090 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.205074143 +0000 UTC m=+154.117410593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.767808 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mbssd"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.805438 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.806689 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.306674628 +0000 UTC m=+154.219011078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.828336 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9mrfh"] Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.839354 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn"] Feb 02 13:05:02 crc kubenswrapper[4955]: W0202 13:05:02.855689 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2367ef70_be73_4dbb_ac4e_f1ce9711351d.slice/crio-989d08866b2bf88a583c115e1da7853f391edc113e8a2c1ab24cd2aebdc7b655 WatchSource:0}: Error finding container 989d08866b2bf88a583c115e1da7853f391edc113e8a2c1ab24cd2aebdc7b655: Status 404 returned error can't find the container with id 989d08866b2bf88a583c115e1da7853f391edc113e8a2c1ab24cd2aebdc7b655 Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.863571 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tkkbx" podStartSLOduration=127.863538763 podStartE2EDuration="2m7.863538763s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:02.861802393 +0000 UTC m=+153.774138843" watchObservedRunningTime="2026-02-02 13:05:02.863538763 +0000 UTC m=+153.775875213" Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.914193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:02 crc kubenswrapper[4955]: E0202 13:05:02.915097 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.415085244 +0000 UTC m=+154.327421694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:02 crc kubenswrapper[4955]: I0202 13:05:02.950203 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-74zm4"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.006550 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.016037 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.016315 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.516299559 +0000 UTC m=+154.428636009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.016602 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.016626 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.074954 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q54rv"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.092268 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9znw" podStartSLOduration=128.092237347 podStartE2EDuration="2m8.092237347s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:03.087651698 +0000 UTC m=+153.999988168" watchObservedRunningTime="2026-02-02 13:05:03.092237347 +0000 UTC m=+154.004573797" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.136268 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.136852 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.636840072 +0000 UTC m=+154.549176522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.145528 4955 csr.go:261] certificate signing request csr-5bpkx is approved, waiting to be issued Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.147011 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.157806 4955 csr.go:257] certificate signing request csr-5bpkx is issued Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.161969 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" event={"ID":"8458e825-0591-415f-960d-06357c721b4c","Type":"ContainerStarted","Data":"4022b1fc92794da4f76b8245404a8552f390a4c378896450ab91d1620c2c0e33"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.199015 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" event={"ID":"cf79b3b7-5bd1-4877-b5ab-1141d969437c","Type":"ContainerStarted","Data":"b2798f9dd17dd261775471a14a82a620b50f95dc6bf018d6a115bb7d14d06472"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.199070 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" event={"ID":"cf79b3b7-5bd1-4877-b5ab-1141d969437c","Type":"ContainerStarted","Data":"6d4b53657015a4cdaacbae3efd16cd5770372ef830fe2b16acd5cc2ae116ff14"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.218766 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" event={"ID":"e8a8d330-b21f-4f25-b971-b880b6adee0c","Type":"ContainerStarted","Data":"343fa6c62d195fb4c1c2ee6d74e4179fa15fb578b23314836eca232d6a56c9cb"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.232515 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" event={"ID":"a0d48541-fa1c-496e-9d39-c5e75faa8d55","Type":"ContainerStarted","Data":"0b5273acd351274827cceb8c645c2e2446955ff71657318fa09e383fba17bdb4"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.233114 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" event={"ID":"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb","Type":"ContainerStarted","Data":"4565151c3e7d2fc0a32b581c3807561386b72a7622e0cc8aa296e39f7151f923"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.237776 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" event={"ID":"8d741e9f-0098-4689-8545-3bafe559a9c3","Type":"ContainerStarted","Data":"73b913ab737429dea66634ca9ec4ebdddd7633fdef6ad564a80c82a60cdf7723"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.238052 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.238355 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.738332174 +0000 UTC m=+154.650668624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.265215 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.328155 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" event={"ID":"07fab9a9-1e1c-442c-88e5-b5add57beff5","Type":"ContainerStarted","Data":"6b8c4478e21ab0406f86a7cb5732b02dcf8caff0ac484c887176c6de7cbc7dfe"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.334041 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" podStartSLOduration=128.334027299 podStartE2EDuration="2m8.334027299s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:03.297434353 +0000 UTC m=+154.209770803" watchObservedRunningTime="2026-02-02 13:05:03.334027299 +0000 UTC m=+154.246363749" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.335174 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9vptd"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.337864 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" event={"ID":"949e1113-3dd6-425e-9425-4208ecd1a30f","Type":"ContainerStarted","Data":"c0e9c05f9567e0e438e6748f0c9ac4ffaca1e609d139c79b2426974d354869cc"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.347405 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.347822 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 13:05:03.847810495 +0000 UTC m=+154.760146945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.356486 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" event={"ID":"a5a8e507-be3d-4682-894b-249235ecc978","Type":"ContainerStarted","Data":"cdf632581b7b0be65e57a46174184e457dd3d879cd086e255c691d0e8388598d"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.357546 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kpfmp" event={"ID":"f962e859-9339-4e87-ab2f-b4b107cf6529","Type":"ContainerStarted","Data":"d5766a603e4b3c45a0173a8bfb549fd305a268d0e693b96bb67718853f553775"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.363436 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ddl6c"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.370047 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" event={"ID":"2367ef70-be73-4dbb-ac4e-f1ce9711351d","Type":"ContainerStarted","Data":"989d08866b2bf88a583c115e1da7853f391edc113e8a2c1ab24cd2aebdc7b655"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.378577 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" event={"ID":"dca84748-635f-4929-9259-e64bc022a883","Type":"ContainerStarted","Data":"b5d660d1bfcc5aa179f4b81bcf9d983f7b5f37580955958ae9c77215f3172cbb"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.380203 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" event={"ID":"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116","Type":"ContainerStarted","Data":"68dd2d756ee8720965ab6f17195e10b5ddf9350a3995edfc9c463ce60067af6c"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.381664 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" event={"ID":"e327a694-d78a-4a74-b353-dbcc4d4ce040","Type":"ContainerStarted","Data":"5ace2241ca93d619cc6c81a50a8ee393a3f1b6c926b7a95ac70e187f50e49309"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.382543 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" event={"ID":"de431f83-45ff-443c-b87c-d7ac12a3d71f","Type":"ContainerStarted","Data":"54aa8aaada423c70e67c7c7fb67cdec82171edd61eab86b6f89b205d895e01f3"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.383607 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" event={"ID":"e8b7ba94-1502-4aa8-aa07-daab4d369add","Type":"ContainerStarted","Data":"8bf82db6a96269cbd028298701eafd2caf900662df6b1aabaf89bce143f1e1bf"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.396072 4955 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" event={"ID":"e89c5a55-99d3-4005-b90c-71041477fb75","Type":"ContainerStarted","Data":"54de12bfebe595637e0aa63907cb9cfc1966a8067d8a79b8a8909a967f7ad6ae"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.399211 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" event={"ID":"8ca77039-1020-4afc-bc21-3b2aeded6728","Type":"ContainerStarted","Data":"0409cd1988cd3dbd7fd76eaa2d8f8bce65179e3cd63f6f802955a32d5042a733"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.408373 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" event={"ID":"e88a945d-172b-40d3-938d-444a4d65bf11","Type":"ContainerStarted","Data":"aa6df968a8104d3871d6aee5107c9c30b2498d6ddf9174ad48084a3a4ce6bad1"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.425903 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" event={"ID":"ddec21a9-43c9-4885-abde-9e65c9a8762d","Type":"ContainerStarted","Data":"8d77498f6449fec965bdb062decdc8190ebe09a6583d431e7107f84fd1978ab8"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.448828 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.449486 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:03.949459451 +0000 UTC m=+154.861795911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.450389 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" event={"ID":"afc3bdeb-edeb-4acf-8e93-c72d471e5f49","Type":"ContainerStarted","Data":"8f358994b4bd6de0dafd95dcfd777e0f042c76c18822529faed4a3cc2118bdcf"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.462907 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" event={"ID":"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0","Type":"ContainerStarted","Data":"20b4474dd2008e9a25202e622c7cad65dc1fa86f6cc6d8b36f6725246708a09b"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.467398 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" event={"ID":"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9","Type":"ContainerStarted","Data":"6736a3a17bc0d0db6b50e04706ec4762d18a5eb7ceab9f5005439728e437a0b3"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.471200 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-59w8t" event={"ID":"e9e6e2cf-9009-4951-bd8d-7878af4bd041","Type":"ContainerStarted","Data":"7b5d563f13b8a7118f77ca6c8cbf3622a6d630a8f90c787f7b5371c0a10f5f88"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.473024 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" event={"ID":"135262fe-e63f-4d62-8260-4a90ee8c1f26","Type":"ContainerStarted","Data":"e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.473055 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" event={"ID":"135262fe-e63f-4d62-8260-4a90ee8c1f26","Type":"ContainerStarted","Data":"886ce91c06386ec84d0bebe543a31ee1d0b8d0143b88c46067f1f8884fb1c9f4"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.473332 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.486716 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6tx72" event={"ID":"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a","Type":"ContainerStarted","Data":"445810b40c8cfc35280abcb94f53041758d7ea3e1d302eccc9ac9066e2dc1ec1"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.491791 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bjmqj" event={"ID":"95f0540a-1739-40df-9242-c9c2e6ccac7f","Type":"ContainerStarted","Data":"10a4431308e95f204ae7337a1960be76c7246925ecc3d0b3badc1e3ffd0cc377"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.491832 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bjmqj" 
event={"ID":"95f0540a-1739-40df-9242-c9c2e6ccac7f","Type":"ContainerStarted","Data":"26b9a324ce19cc89992069c0f94210b2fec3add1a9e954ba314ce14807f5cbf2"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.508221 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"] Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.509745 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" event={"ID":"c7c59585-55a6-4686-a998-058c2228f134","Type":"ContainerStarted","Data":"9b1aefff84d04644265911899008235c6e9cd9d150e25cac6595ac5072769e85"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.509820 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" event={"ID":"c7c59585-55a6-4686-a998-058c2228f134","Type":"ContainerStarted","Data":"296cf5617134221b444d9d7f69a2e1d7acaa0aeef9628629bf1c8805a7f26e89"} Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.510356 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.512654 4955 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2lb4b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.512697 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" podUID="c7c59585-55a6-4686-a998-058c2228f134" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.531915 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.538956 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-q7gnb" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.552948 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.558235 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.058219065 +0000 UTC m=+154.970555515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: W0202 13:05:03.572098 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod987c78e8_da7f_41e7_be71_1eccab1829b6.slice/crio-c1eb53df36584a1d4c0c0fb25e3e18b4c20865ae745cb510368ec9d16a033670 WatchSource:0}: Error finding container c1eb53df36584a1d4c0c0fb25e3e18b4c20865ae745cb510368ec9d16a033670: Status 404 returned error can't find the container with id c1eb53df36584a1d4c0c0fb25e3e18b4c20865ae745cb510368ec9d16a033670 Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.663535 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.665024 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.164992503 +0000 UTC m=+155.077328943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.672753 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.676753 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.17673599 +0000 UTC m=+155.089072440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.743126 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlcfp" podStartSLOduration=128.743108851 podStartE2EDuration="2m8.743108851s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:03.689746868 +0000 UTC m=+154.602083318" watchObservedRunningTime="2026-02-02 13:05:03.743108851 +0000 UTC m=+154.655445301" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.777683 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.779599 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.279528702 +0000 UTC m=+155.191865152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.836115 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.855751 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" podStartSLOduration=127.855733117 podStartE2EDuration="2m7.855733117s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:03.841687604 +0000 UTC m=+154.754024054" watchObservedRunningTime="2026-02-02 13:05:03.855733117 +0000 UTC m=+154.768069567" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.880145 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.880832 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.38082028 +0000 UTC m=+155.293156730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.909290 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.928773 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:03 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:03 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:03 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.928823 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:03 crc kubenswrapper[4955]: I0202 13:05:03.980836 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:03 crc kubenswrapper[4955]: E0202 13:05:03.981623 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.481605486 +0000 UTC m=+155.393941936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.032215 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kpfmp" podStartSLOduration=6.032197873 podStartE2EDuration="6.032197873s" podCreationTimestamp="2026-02-02 13:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.031845985 +0000 UTC m=+154.944182435" watchObservedRunningTime="2026-02-02 13:05:04.032197873 +0000 UTC m=+154.944534323" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.083027 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.083648 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.583638451 +0000 UTC m=+155.495974901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.110078 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" podStartSLOduration=128.110057905 podStartE2EDuration="2m8.110057905s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.107039474 +0000 UTC m=+155.019375924" watchObservedRunningTime="2026-02-02 13:05:04.110057905 +0000 UTC m=+155.022394355" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.159356 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 13:00:03 +0000 UTC, rotation deadline is 2026-11-27 06:57:10.2355779 +0000 UTC Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.159403 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7145h52m6.076177776s for next certificate rotation Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.181640 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mj5sp" podStartSLOduration=129.18162526 podStartE2EDuration="2m9.18162526s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.139963344 +0000 UTC m=+155.052299814" watchObservedRunningTime="2026-02-02 13:05:04.18162526 +0000 UTC m=+155.093961710" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.184015 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.184361 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.684344524 +0000 UTC m=+155.596680974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.261137 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6tx72" podStartSLOduration=129.261123291 podStartE2EDuration="2m9.261123291s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.260073897 +0000 UTC m=+155.172410337" watchObservedRunningTime="2026-02-02 13:05:04.261123291 +0000 UTC m=+155.173459741" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.261879 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-bjmqj" podStartSLOduration=129.261873099 podStartE2EDuration="2m9.261873099s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.232247708 +0000 UTC m=+155.144584158" watchObservedRunningTime="2026-02-02 13:05:04.261873099 +0000 UTC m=+155.174209549" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.294004 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.294322 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.794311216 +0000 UTC m=+155.706647666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.347834 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b" podStartSLOduration=128.347812032 podStartE2EDuration="2m8.347812032s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.302525361 +0000 UTC m=+155.214861811" watchObservedRunningTime="2026-02-02 13:05:04.347812032 +0000 UTC m=+155.260148482" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.407669 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.409838 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:04.909814981 +0000 UTC m=+155.822151431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.513740 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.514445 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.014430686 +0000 UTC m=+155.926767146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.619656 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.620021 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.120002595 +0000 UTC m=+156.032339045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.722470 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.723042 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.223030823 +0000 UTC m=+156.135367273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.753484 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" event={"ID":"dca84748-635f-4929-9259-e64bc022a883","Type":"ContainerStarted","Data":"fd66fc80b6749cf72e56e1e78b75305e36c526786e4148a2f43c3a2b5c275c45"} Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.825634 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.825978 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.325955699 +0000 UTC m=+156.238292149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.913602 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ddl6c" event={"ID":"987c78e8-da7f-41e7-be71-1eccab1829b6","Type":"ContainerStarted","Data":"c1eb53df36584a1d4c0c0fb25e3e18b4c20865ae745cb510368ec9d16a033670"} Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.920312 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:04 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:04 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:04 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.920357 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.928219 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" 
(UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:04 crc kubenswrapper[4955]: E0202 13:05:04.929172 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.429162201 +0000 UTC m=+156.341498651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.972919 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" event={"ID":"afc3bdeb-edeb-4acf-8e93-c72d471e5f49","Type":"ContainerStarted","Data":"dfbcb5e37030b3e1e9b211bbf2b40e3e44d4edbb1f5f6beb64ea8a7f245c8e70"} Feb 02 13:05:04 crc kubenswrapper[4955]: I0202 13:05:04.997626 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" event={"ID":"e327a694-d78a-4a74-b353-dbcc4d4ce040","Type":"ContainerStarted","Data":"1bade1bb05413de8e80544f8712a6129a215999c28ece913bf5c54934185b803"} Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.030578 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.030916 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.53089895 +0000 UTC m=+156.443235400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.047965 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" event={"ID":"b9332760-2eb1-47c0-b93a-97168fc74379","Type":"ContainerStarted","Data":"57c90fe745aebf31375919d038efb4912f0fbff1c35c8b7dd98bb4568d9e5f04"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.052270 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" event={"ID":"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8","Type":"ContainerStarted","Data":"a1535d4bbdbcfd5c0e2ac82ef84757a46cdaab8dbaf59965b19d668e38579ffa"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.058006 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" event={"ID":"9798e02a-e958-4d10-8b6b-a9eb99ae2600","Type":"ContainerStarted","Data":"f7b2d5535f661778af9bac3f263323dfa81558fc33f71b860ca0f338daa55ae2"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.093179 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" podStartSLOduration=130.093163204 podStartE2EDuration="2m10.093163204s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.089927797 +0000 UTC m=+156.002264247" watchObservedRunningTime="2026-02-02 13:05:05.093163204 +0000 UTC m=+156.005499654"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.103869 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" event={"ID":"8ca77039-1020-4afc-bc21-3b2aeded6728","Type":"ContainerStarted","Data":"bc1cf6295e47342debe6dc5957dad79f56f99e2dce8b0d6935545da749d293d4"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.103995 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg9pb" podStartSLOduration=130.103973769 podStartE2EDuration="2m10.103973769s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:04.789397834 +0000 UTC m=+155.701734294" watchObservedRunningTime="2026-02-02 13:05:05.103973769 +0000 UTC m=+156.016310219"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.105029 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.135127 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.135590 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.635575097 +0000 UTC m=+156.547911547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.136502 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.146844 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9slxp" podStartSLOduration=129.146821353 podStartE2EDuration="2m9.146821353s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.140939544 +0000 UTC m=+156.053275994" watchObservedRunningTime="2026-02-02 13:05:05.146821353 +0000 UTC m=+156.059157803"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.170064 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" event={"ID":"8458e825-0591-415f-960d-06357c721b4c","Type":"ContainerStarted","Data":"97dbe02a59782fe1dd5be671f1fd90d7efc88bceb4c0e4f107a7b9005b28fc13"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.188715 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" event={"ID":"cf79b3b7-5bd1-4877-b5ab-1141d969437c","Type":"ContainerStarted","Data":"36073b5da56130a95495ee9d53473659e7d36660847b5db06710d11e2c4a5a1b"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.200153 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xh9w4" podStartSLOduration=130.200133415 podStartE2EDuration="2m10.200133415s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.199569012 +0000 UTC m=+156.111905462" watchObservedRunningTime="2026-02-02 13:05:05.200133415 +0000 UTC m=+156.112469865"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.227050 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" event={"ID":"07fab9a9-1e1c-442c-88e5-b5add57beff5","Type":"ContainerStarted","Data":"2c2c74cd96ab4cc2b8559de3359074ccb0d9d392dfa6380d803920ac5693a0c0"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.235819 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.235932 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.735915932 +0000 UTC m=+156.648252382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.236084 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.237202 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.737186402 +0000 UTC m=+156.649522852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.239058 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" event={"ID":"7ea12c47-cdd1-4fa0-aea1-8142e2754bb9","Type":"ContainerStarted","Data":"d5749ce3fd849c70a33be79df1fa2142e0bdf0b0df2f6efd84c4cff4eaf79fd6"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.241621 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zbp7g" podStartSLOduration=130.241603407 podStartE2EDuration="2m10.241603407s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.240410139 +0000 UTC m=+156.152746589" watchObservedRunningTime="2026-02-02 13:05:05.241603407 +0000 UTC m=+156.153939857"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.276428 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cm2xp" event={"ID":"4d4e36fe-c5e2-4e7e-8b0e-fbb6b16a98d0","Type":"ContainerStarted","Data":"403b40736d5589e441e37f84cc094e98680a7c8b34c5ada5f8b2a35177e56371"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.276418 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xqqwk" podStartSLOduration=130.27640263 podStartE2EDuration="2m10.27640263s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.275009157 +0000 UTC m=+156.187345607" watchObservedRunningTime="2026-02-02 13:05:05.27640263 +0000 UTC m=+156.188739080"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.300332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" event={"ID":"8d741e9f-0098-4689-8545-3bafe559a9c3","Type":"ContainerStarted","Data":"2feda48d4acc8900c49e1f41dafffaa9c770aede5aec2ee3e0d336009ae04478"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.301787 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" event={"ID":"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00","Type":"ContainerStarted","Data":"6db54ec08051b08c0983e107a026f0ec83b2ed30d3b56e2a16a7ddfc9a277c79"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.303237 4955 generic.go:334] "Generic (PLEG): container finished" podID="e88a945d-172b-40d3-938d-444a4d65bf11" containerID="8e383d68f649483593150b08c50c43ad1949208f1d4026b72827430b20dfffc1" exitCode=0
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.303279 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" event={"ID":"e88a945d-172b-40d3-938d-444a4d65bf11","Type":"ContainerDied","Data":"8e383d68f649483593150b08c50c43ad1949208f1d4026b72827430b20dfffc1"}
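The TearDown and MountDevice failures above share a single root cause: the kubelet cannot find kubevirt.io.hostpath-provisioner among the CSI drivers registered on this node, so it cannot even construct a CSI client for the volume. Registration normally happens when the driver's node plugin announces itself to the kubelet's plugin watcher, and the per-node result is published in the CSINode object. A minimal client-go sketch for inspecting that list, assuming a kubeconfig at the default ~/.kube/config path (everything else is the standard storage/v1 API):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at ~/.kube/config (adjust for CRC setups).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// CSINode reports the drivers whose node plugins have registered with the
	// kubelet; a driver missing here matches the "not found in the list of
	// registered CSI drivers" errors in the log.
	csiNodes, err := client.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range csiNodes.Items {
		fmt.Printf("node %s: %d driver(s) registered\n", n.Name, len(n.Spec.Drivers))
		for _, d := range n.Spec.Drivers {
			fmt.Printf("  %s (nodeID %s)\n", d.Name, d.NodeID)
		}
	}
}

If kubevirt.io.hostpath-provisioner is absent from that output the errors are expected; further down the log, the hostpath-provisioner/csi-hostpathplugin-9mrfh pod only reports ContainerStarted at 13:05:06.661678, consistent with the driver simply not having registered yet while these retries were failing.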
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.304521 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6tx72" event={"ID":"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a","Type":"ContainerStarted","Data":"3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.305827 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" event={"ID":"e17bf741-cd77-4d87-aea5-663e5d2ba319","Type":"ContainerStarted","Data":"4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.305854 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" event={"ID":"e17bf741-cd77-4d87-aea5-663e5d2ba319","Type":"ContainerStarted","Data":"c788001b83b4f4c716c4310366d1bbc8d2f91bfa69384d64b2b7d49e59cedaf4"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.306415 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.307474 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" event={"ID":"a5a8e507-be3d-4682-894b-249235ecc978","Type":"ContainerStarted","Data":"10a692a4655edf438f60f3863673330662daa5470990e1747022969bd2bf4af1"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.311361 4955 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-74zm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.311409 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.312148 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" event={"ID":"ddec21a9-43c9-4885-abde-9e65c9a8762d","Type":"ContainerStarted","Data":"daa3b54a32e619bbe4ac4ab5fed8357c9e6e876192037a9dbd54b8ff10e95364"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.312948 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.339154 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q54rv" event={"ID":"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44","Type":"ContainerStarted","Data":"927c0657d8a136e34c6e8e0fcf529a0b0479f132842fdaf6e93938fd52492835"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.339449 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q54rv" event={"ID":"23cd2aa2-9b0c-4cab-9252-c8cd4e749a44","Type":"ContainerStarted","Data":"d97d08fc473fec6bc1a55d29018ece60461e6319cfa7fef2d0e5562fbb6048bb"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.345180 4955 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dmwz5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body=
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.345372 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" podUID="ddec21a9-43c9-4885-abde-9e65c9a8762d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.357209 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.359715 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.859690061 +0000 UTC m=+156.772026511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.363934 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" podStartSLOduration=130.363918382 podStartE2EDuration="2m10.363918382s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.363695896 +0000 UTC m=+156.276032346" watchObservedRunningTime="2026-02-02 13:05:05.363918382 +0000 UTC m=+156.276254832"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.377879 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" podStartSLOduration=130.377863062 podStartE2EDuration="2m10.377863062s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.32203029 +0000 UTC m=+156.234366740" watchObservedRunningTime="2026-02-02 13:05:05.377863062 +0000 UTC m=+156.290199512"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.393234 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mrjsw" podStartSLOduration=129.393214905 podStartE2EDuration="2m9.393214905s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.387609342 +0000 UTC m=+156.299945802" watchObservedRunningTime="2026-02-02 13:05:05.393214905 +0000 UTC m=+156.305551355"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.413858 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" event={"ID":"32e7b84e-a5b9-476f-a3f0-a4227db1d8e8","Type":"ContainerStarted","Data":"d86c9df9f466a9f155bb70896db92de728862ab9d33a03273234e18203336079"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.459261 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" podStartSLOduration=130.459243008 podStartE2EDuration="2m10.459243008s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.458135211 +0000 UTC m=+156.370471671" watchObservedRunningTime="2026-02-02 13:05:05.459243008 +0000 UTC m=+156.371579458"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.462763 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.470008 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" event={"ID":"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb","Type":"ContainerStarted","Data":"547d31ab6b277dd1010df3b53f91b5cf201d77244f4c6a5254af439377873da7"}
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.473410 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:05.973397982 +0000 UTC m=+156.885734432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.507821 4955 generic.go:334] "Generic (PLEG): container finished" podID="1597ed2b-7fa1-4e63-8826-6e5e3ee7d116" containerID="f1606843978afb03856b91b9afd37a18c61f1369ff0bbd08a87971ce4c0a6e7e" exitCode=0
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.507896 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" event={"ID":"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116","Type":"ContainerDied","Data":"f1606843978afb03856b91b9afd37a18c61f1369ff0bbd08a87971ce4c0a6e7e"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.551373 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" podStartSLOduration=129.551357448 podStartE2EDuration="2m9.551357448s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.507371626 +0000 UTC m=+156.419708076" watchObservedRunningTime="2026-02-02 13:05:05.551357448 +0000 UTC m=+156.463693898"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.552083 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" podStartSLOduration=130.552078274 podStartE2EDuration="2m10.552078274s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.549990205 +0000 UTC m=+156.462326655" watchObservedRunningTime="2026-02-02 13:05:05.552078274 +0000 UTC m=+156.464414724"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.563943 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.564532 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.064517799 +0000 UTC m=+156.976854249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
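Each failure above is tracked by nestedpendingoperations.go, the kubelet layer that serializes volume operations per volume and refuses to start the same operation again until a backoff window expires; that is what "No retries permitted until ... (durationBeforeRetry 500ms)" means: the reconciler keeps requesting the mount or unmount on every pass, but the operation actually re-runs only after the window. A deliberately simplified sketch of that gate, assuming the fixed 500ms window shown in these entries (the real implementation can also grow the delay on repeated failures):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// retryGate mimics, in reduced form, the per-operation bookkeeping behind the
// nestedpendingoperations.go messages: after a failure, the same operation key
// is rejected until its "no retries permitted until" deadline passes.
type retryGate struct {
	mu      sync.Mutex
	waitFor map[string]time.Time // operation key -> retry deadline
}

func newRetryGate() *retryGate {
	return &retryGate{waitFor: make(map[string]time.Time)}
}

// errRetryLater corresponds to the "No retries permitted until ..." log line.
var errRetryLater = errors.New("no retries permitted yet")

func (g *retryGate) run(key string, op func() error) error {
	g.mu.Lock()
	until, blocked := g.waitFor[key]
	g.mu.Unlock()
	if blocked && time.Now().Before(until) {
		return fmt.Errorf("%w (until %s)", errRetryLater, until.Format(time.RFC3339Nano))
	}
	err := op()
	if err != nil {
		g.mu.Lock()
		// Assumed fixed 500ms window, matching the log's durationBeforeRetry.
		g.waitFor[key] = time.Now().Add(500 * time.Millisecond)
		g.mu.Unlock()
	}
	return err
}

func main() {
	g := newRetryGate()
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-..."
	failing := func() error { return errors.New("driver not registered") }

	for i := 0; i < 4; i++ {
		fmt.Println(g.run(key, failing))
		time.Sleep(200 * time.Millisecond) // reconciler passes arrive faster than the window
	}
}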
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.572093 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-59w8t" event={"ID":"e9e6e2cf-9009-4951-bd8d-7878af4bd041","Type":"ContainerStarted","Data":"ff467a479e0318dc2a3f3b22ce7145a6cb11c9de31d945a2d7686bc4287c524e"}
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.572133 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-59w8t"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.597302 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" podStartSLOduration=130.597286195 podStartE2EDuration="2m10.597286195s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.595380899 +0000 UTC m=+156.507717349" watchObservedRunningTime="2026-02-02 13:05:05.597286195 +0000 UTC m=+156.509622645"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.599012 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-59w8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.599059 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-59w8t" podUID="e9e6e2cf-9009-4951-bd8d-7878af4bd041" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.603245 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lb4b"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.666181 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.673725 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.173712143 +0000 UTC m=+157.086048593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.726477 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" podStartSLOduration=130.726446242 podStartE2EDuration="2m10.726446242s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.642013344 +0000 UTC m=+156.554349794" watchObservedRunningTime="2026-02-02 13:05:05.726446242 +0000 UTC m=+156.638782712"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.727225 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q54rv" podStartSLOduration=7.727220399 podStartE2EDuration="7.727220399s" podCreationTimestamp="2026-02-02 13:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.712838689 +0000 UTC m=+156.625175139" watchObservedRunningTime="2026-02-02 13:05:05.727220399 +0000 UTC m=+156.639556849"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.771998 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.772249 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.272234916 +0000 UTC m=+157.184571366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.831277 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-59w8t" podStartSLOduration=130.831260693 podStartE2EDuration="2m10.831260693s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:05.830360951 +0000 UTC m=+156.742697391" watchObservedRunningTime="2026-02-02 13:05:05.831260693 +0000 UTC m=+156.743597143"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.886188 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.886455 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.386444248 +0000 UTC m=+157.298780698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.925992 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:05:05 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld
Feb 02 13:05:05 crc kubenswrapper[4955]: [+]process-running ok
Feb 02 13:05:05 crc kubenswrapper[4955]: healthz check failed
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.926044 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:05:05 crc kubenswrapper[4955]: I0202 13:05:05.987997 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:05 crc kubenswrapper[4955]: E0202 13:05:05.988356 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.48834255 +0000 UTC m=+157.400679000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.089845 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.090212 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.590196321 +0000 UTC m=+157.502532761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
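The probe entries interleaved with the volume errors follow the kubelet's standard HTTP prober behavior: issue a GET against the container's endpoint, treat any transport error as a probe failure (here "connect: connection refused", since the freshly started containers are not listening yet), and treat a 2xx/3xx status as success. A rough equivalent of one such check, reusing the marketplace-operator URL from the entries above (the 1s timeout is an assumed value, not taken from the log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP readiness check the way the log entries
// describe it: any dial error or non-2xx/3xx status counts as "failure".
func probeOnce(url string) (result string, output string) {
	client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.37:8080: connect: connection refused"
		return "failure", fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return "failure", fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return "success", ""
}

func main() {
	result, output := probeOnce("http://10.217.0.37:8080/healthz")
	fmt.Printf("probeResult=%q output=%q\n", result, output)
}

The router entries show the other failure shape: the endpoint answers but returns 500, so the probe fails on status code and the response body ("[-]backend-http failed: reason withheld ...") is logged as the start-of-body.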
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.190856 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.190986 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.690960346 +0000 UTC m=+157.603296796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.191042 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.191330 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.691317774 +0000 UTC m=+157.603654224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.291574 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.291765 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.791735851 +0000 UTC m=+157.704072301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.291841 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.292207 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.792197581 +0000 UTC m=+157.704534122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.392655 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.392838 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.892811583 +0000 UTC m=+157.805148033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.392978 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.393278 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.893265723 +0000 UTC m=+157.805602173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.494452 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.494648 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.994624163 +0000 UTC m=+157.906960613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.494951 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.495274 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:06.995263678 +0000 UTC m=+157.907600128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.577609 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" event={"ID":"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8","Type":"ContainerStarted","Data":"5318d4de71a18d269b10f5e705695e3fb4cfc62c531d3a00343832677f04d821"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.577671 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" event={"ID":"c16acdaf-3bd0-4c0d-94a0-49da6c643bf8","Type":"ContainerStarted","Data":"7bdd428095a496fba82eafd63582d41f4008c9f1a5e833e3ea000ff89cd01519"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.579409 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" event={"ID":"afc3bdeb-edeb-4acf-8e93-c72d471e5f49","Type":"ContainerStarted","Data":"f094c9b718063cd30b78d60b2759542262e2e7294a28677c19de70465f99e6a9"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.579536 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.580576 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5gxr" event={"ID":"9798e02a-e958-4d10-8b6b-a9eb99ae2600","Type":"ContainerStarted","Data":"3f11e115f10fba1ee8b3ad8153dadb2730abc77bc93fa042a571c48181238915"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.582468 4955 generic.go:334] "Generic (PLEG): container finished" podID="de431f83-45ff-443c-b87c-d7ac12a3d71f" containerID="86105798011fba3f38f149cdbf29480ab95316b196ee60d9104a1dbb83e1756c" exitCode=0
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.582531 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" event={"ID":"de431f83-45ff-443c-b87c-d7ac12a3d71f","Type":"ContainerDied","Data":"86105798011fba3f38f149cdbf29480ab95316b196ee60d9104a1dbb83e1756c"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.582551 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" event={"ID":"de431f83-45ff-443c-b87c-d7ac12a3d71f","Type":"ContainerStarted","Data":"afab76dc8556b0e735f2deab031f893a0e6e1a1a8133c9d9d6f0e60e19dff3a3"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.584132 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" event={"ID":"e8a8d330-b21f-4f25-b971-b880b6adee0c","Type":"ContainerStarted","Data":"822c86f5e8b1d13b1bc63a37891451b2bd22943faab3e2fd288f839a708a3f5b"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.584179 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" event={"ID":"e8a8d330-b21f-4f25-b971-b880b6adee0c","Type":"ContainerStarted","Data":"bf4a683d55294c42521b722709792fc59e8224626471c3616f64f83771082195"}
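The pod_startup_latency_tracker lines can be checked against their own fields: podStartSLOduration is the distance from podCreationTimestamp to the watchObservedRunningTime timestamp, in seconds, and podStartE2EDuration is the same value rendered as a Go duration string (image-pull time would be excluded, but every entry in this section has zero firstStartedPulling/lastFinishedPulling). Verifying the kube-controller-manager-operator-78b949d7b-vfbjc entry from above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the tracker's timestamps (Go's default time.String() form).
	const layout = "2006-01-02 15:04:05 -0700 MST"

	// Values copied from the kube-controller-manager-operator-78b949d7b-vfbjc entry.
	created, err := time.Parse(layout, "2026-02-02 13:02:55 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2026-02-02 13:05:05.726446242 +0000 UTC")
	if err != nil {
		panic(err)
	}

	d := watched.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f\n", d.Seconds()) // 130.726446242
	fmt.Printf("podStartE2EDuration=%q\n", d.String())    // "2m10.726446242s"
}

This prints 130.726446242 and "2m10.726446242s", matching the logged podStartSLOduration=130.726446242 and podStartE2EDuration="2m10.726446242s"; the ~2m10s figures across these entries reflect pods created at 13:02:55-13:02:56 that only reached Running once the node's sync loop caught up at 13:05:05-13:05:06.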
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.585505 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfhzm" event={"ID":"a5a8e507-be3d-4682-894b-249235ecc978","Type":"ContainerStarted","Data":"ef3c0c8758864d6460aa2672c60725946e1c397d111be971d2c24315bc7c07fa"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.586492 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ptgb8" event={"ID":"9cf0c869-fa36-4262-ae8d-aae0bf0f5f00","Type":"ContainerStarted","Data":"fbd885bf4275cafcd9ffb973c83db0153108356060412a62475392448c326ac9"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.587888 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" event={"ID":"32e7b84e-a5b9-476f-a3f0-a4227db1d8e8","Type":"ContainerStarted","Data":"405c79f739217eeace591c5d8e0acf7dfa9803330f98e90e486f74d7f2982d79"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.587914 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" event={"ID":"32e7b84e-a5b9-476f-a3f0-a4227db1d8e8","Type":"ContainerStarted","Data":"1a0ecbed484b5830f01cc39163811e52fbda9be1f5aa27ce04f81cf7894b5653"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.589001 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" event={"ID":"2367ef70-be73-4dbb-ac4e-f1ce9711351d","Type":"ContainerStarted","Data":"eb30e86dd5160f192199397bbe3bd9c503cc251a16941b0a87d34230c9676a61"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.590315 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m8vvr" event={"ID":"07fab9a9-1e1c-442c-88e5-b5add57beff5","Type":"ContainerStarted","Data":"21ac8269aafb66ad962e8f844a55498005f1f24d46ca066c20b6b73c9c4d0334"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.591484 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vfbjc" event={"ID":"a0d48541-fa1c-496e-9d39-c5e75faa8d55","Type":"ContainerStarted","Data":"3edd5b758b793f99b22f0bbca462df8c09db42d29d698df5c1ec3854f3d41287"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.593314 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" event={"ID":"b9332760-2eb1-47c0-b93a-97168fc74379","Type":"ContainerStarted","Data":"acd15712c4bb0b8081ad3c44d72f53cf034059620e970b196c98af922477f621"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.593574 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.594593 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" event={"ID":"949e1113-3dd6-425e-9425-4208ecd1a30f","Type":"ContainerStarted","Data":"31a2ecb4dd56a073d3036268487b7a36710316db285d28b1877abb34c7f5513d"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.594792 4955 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ncj99 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.594830 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" podUID="b9332760-2eb1-47c0-b93a-97168fc74379" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.595420 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.595544 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.095524201 +0000 UTC m=+158.007860651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.595765 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.596156 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.096122515 +0000 UTC m=+158.008458965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.596199 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" event={"ID":"1597ed2b-7fa1-4e63-8826-6e5e3ee7d116","Type":"ContainerStarted","Data":"d96938b88f18259d936443c6a1751973a96d8ff04c92cec7440388a36d67e6e6"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.596306 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.609879 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" event={"ID":"e88a945d-172b-40d3-938d-444a4d65bf11","Type":"ContainerStarted","Data":"2cd73daa915bc16718a19e870eddf10b94aeb691239b270fec0b4ea61e7f43d9"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.609925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" event={"ID":"e88a945d-172b-40d3-938d-444a4d65bf11","Type":"ContainerStarted","Data":"f488b93a13c69d83ee878e6c0e65a0ff11dd9bf2cfdead1e0eb0b672b50c5ec2"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.617742 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9vptd" podStartSLOduration=131.617728486 podStartE2EDuration="2m11.617728486s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.616376554 +0000 UTC m=+157.528713004" watchObservedRunningTime="2026-02-02 13:05:06.617728486 +0000 UTC m=+157.530064936"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.636374 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" podStartSLOduration=130.636357127 podStartE2EDuration="2m10.636357127s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.635896016 +0000 UTC m=+157.548232456" watchObservedRunningTime="2026-02-02 13:05:06.636357127 +0000 UTC m=+157.548693577"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.645422 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ddl6c" event={"ID":"987c78e8-da7f-41e7-be71-1eccab1829b6","Type":"ContainerStarted","Data":"1a975ef373092c4dfcc375caf3dde855efe0e8408782bd653b0b326f94c38118"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.645462 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ddl6c" event={"ID":"987c78e8-da7f-41e7-be71-1eccab1829b6","Type":"ContainerStarted","Data":"b24743ebabbb98bbf0faee254eef73880d22af003e234e687b9a10df5002f1c5"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.645999 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ddl6c"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.660924 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2lgxn" podStartSLOduration=131.660904687 podStartE2EDuration="2m11.660904687s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.659135696 +0000 UTC m=+157.571472146" watchObservedRunningTime="2026-02-02 13:05:06.660904687 +0000 UTC m=+157.573241137"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.661678 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" event={"ID":"e8b7ba94-1502-4aa8-aa07-daab4d369add","Type":"ContainerStarted","Data":"14f3819cd991917fd8f663b70bf713228245ee7a673d3a2f9e9736cea34cd55f"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.678700 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-59w8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.678757 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-59w8t" podUID="e9e6e2cf-9009-4951-bd8d-7878af4bd041" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.678985 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" event={"ID":"e327a694-d78a-4a74-b353-dbcc4d4ce040","Type":"ContainerStarted","Data":"0d985e5fa043b011b279c37ce47e5f540a7e90fe9ca954482d7dcd8f97fddbaa"}
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.685694 4955 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-74zm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.685732 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.696125 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pcs" podStartSLOduration=131.69610517 podStartE2EDuration="2m11.69610517s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.686041112 +0000 UTC m=+157.598377582" watchObservedRunningTime="2026-02-02 13:05:06.69610517 +0000 UTC m=+157.608441630"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.696725 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.696823 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.697545 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.197526424 +0000 UTC m=+158.109862874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.733689 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpmfq" podStartSLOduration=131.73367096 podStartE2EDuration="2m11.73367096s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.731877278 +0000 UTC m=+157.644213728" watchObservedRunningTime="2026-02-02 13:05:06.73367096 +0000 UTC m=+157.646007410"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.787204 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mbssd" podStartSLOduration=131.787183606 podStartE2EDuration="2m11.787183606s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.757919914 +0000 UTC m=+157.670256364" watchObservedRunningTime="2026-02-02 13:05:06.787183606 +0000 UTC m=+157.699520056"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.788048 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" podStartSLOduration=131.788041067 podStartE2EDuration="2m11.788041067s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.786659204 +0000 UTC m=+157.698995654" watchObservedRunningTime="2026-02-02 13:05:06.788041067 +0000 UTC m=+157.700377527"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.800403 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.802060 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.302045728 +0000 UTC m=+158.214382178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.830274 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" podStartSLOduration=130.830257516 podStartE2EDuration="2m10.830257516s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.829413106 +0000 UTC m=+157.741749566" watchObservedRunningTime="2026-02-02 13:05:06.830257516 +0000 UTC m=+157.742593966"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.857469 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" podStartSLOduration=130.85745024 podStartE2EDuration="2m10.85745024s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.855921914 +0000 UTC m=+157.768258364" watchObservedRunningTime="2026-02-02 13:05:06.85745024 +0000 UTC m=+157.769786690"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.884943 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" podStartSLOduration=131.88492735 podStartE2EDuration="2m11.88492735s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.882993354 +0000 UTC m=+157.795329804" watchObservedRunningTime="2026-02-02 13:05:06.88492735 +0000 UTC m=+157.797263810"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.903174 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:05:06 crc kubenswrapper[4955]: E0202 13:05:06.903465 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.403450829 +0000 UTC m=+158.315787279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.914108 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:05:06 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld
Feb 02 13:05:06 crc kubenswrapper[4955]: [+]process-running ok
Feb 02 13:05:06 crc kubenswrapper[4955]: healthz check failed
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.914165 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.969938 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-46rd2" podStartSLOduration=130.969917231 podStartE2EDuration="2m10.969917231s" podCreationTimestamp="2026-02-02 13:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.947962841 +0000 UTC m=+157.860299321" watchObservedRunningTime="2026-02-02 13:05:06.969917231 +0000 UTC m=+157.882253681"
Feb 02 13:05:06 crc kubenswrapper[4955]: I0202 13:05:06.971933 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ddl6c" podStartSLOduration=8.971926639 podStartE2EDuration="8.971926639s" podCreationTimestamp="2026-02-02 13:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:06.969474781 +0000 UTC m=+157.881811231" watchObservedRunningTime="2026-02-02 13:05:06.971926639 +0000 UTC m=+157.884263089"
Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.004315 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr"
Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.004730 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.504710945 +0000 UTC m=+158.417047395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.006567 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rvfwg"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.007438 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.008960 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.025509 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvfwg"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.108028 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.108313 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswlj\" (UniqueName: \"kubernetes.io/projected/bdc220a9-b1a9-4d3b-aba5-37820b63181f-kube-api-access-sswlj\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.108374 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-utilities\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.108424 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-catalog-content\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.108530 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.608514082 +0000 UTC m=+158.520850522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.210535 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-catalog-content\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.210628 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sswlj\" (UniqueName: \"kubernetes.io/projected/bdc220a9-b1a9-4d3b-aba5-37820b63181f-kube-api-access-sswlj\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.210673 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.210707 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-utilities\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.211058 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.711041328 +0000 UTC m=+158.623377778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.211058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-catalog-content\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.211104 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-utilities\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.225484 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5hd5"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.226384 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.231067 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.244355 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5hd5"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.266364 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sswlj\" (UniqueName: \"kubernetes.io/projected/bdc220a9-b1a9-4d3b-aba5-37820b63181f-kube-api-access-sswlj\") pod \"community-operators-rvfwg\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.312151 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.312363 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.812333496 +0000 UTC m=+158.724669956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.312502 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-catalog-content\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.312590 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgt4t\" (UniqueName: \"kubernetes.io/projected/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-kube-api-access-rgt4t\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.312677 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.312774 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-utilities\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.312989 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.812979191 +0000 UTC m=+158.725315641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.327176 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.408439 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k2nfd"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.409586 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.414212 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.414399 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-catalog-content\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.414434 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgt4t\" (UniqueName: \"kubernetes.io/projected/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-kube-api-access-rgt4t\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.414494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-utilities\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.415168 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-utilities\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.415250 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:07.915234231 +0000 UTC m=+158.827570681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.415660 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-catalog-content\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.424513 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2nfd"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.450780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgt4t\" (UniqueName: \"kubernetes.io/projected/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-kube-api-access-rgt4t\") pod \"certified-operators-t5hd5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.515500 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-utilities\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.515584 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-catalog-content\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.515665 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.515703 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqp6d\" (UniqueName: \"kubernetes.io/projected/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-kube-api-access-tqp6d\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.515978 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.015967415 +0000 UTC m=+158.928303855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.539823 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.599978 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-82zvz"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.600903 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.610717 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-82zvz"] Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.618537 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.618714 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqp6d\" (UniqueName: \"kubernetes.io/projected/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-kube-api-access-tqp6d\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.618755 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-utilities\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.618788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-catalog-content\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.619947 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.119930385 +0000 UTC m=+159.032266835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.620292 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-utilities\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.620718 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-catalog-content\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.656982 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqp6d\" (UniqueName: \"kubernetes.io/projected/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-kube-api-access-tqp6d\") pod \"community-operators-k2nfd\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.720406 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.722621 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.222605786 +0000 UTC m=+159.134942236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.743965 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-catalog-content\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.744021 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-utilities\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.744118 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vln\" (UniqueName: \"kubernetes.io/projected/8954d9ba-b759-4256-834b-e781be220107-kube-api-access-s8vln\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.745844 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.825610 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" event={"ID":"e8b7ba94-1502-4aa8-aa07-daab4d369add","Type":"ContainerStarted","Data":"5d95aea16cee301e404b8014421a9daea7d626a800790c1e3733992656047f2b"} Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.847067 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.847254 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-catalog-content\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.847289 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-utilities\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.847337 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vln\" (UniqueName: \"kubernetes.io/projected/8954d9ba-b759-4256-834b-e781be220107-kube-api-access-s8vln\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.847695 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.347680926 +0000 UTC m=+159.260017366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.848033 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-catalog-content\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.848300 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-utilities\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.859085 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.887889 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vln\" (UniqueName: \"kubernetes.io/projected/8954d9ba-b759-4256-834b-e781be220107-kube-api-access-s8vln\") pod \"certified-operators-82zvz\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.932754 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:07 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:07 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:07 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.933079 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.939299 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:07 crc kubenswrapper[4955]: I0202 13:05:07.960208 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:07 crc kubenswrapper[4955]: E0202 13:05:07.981833 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 13:05:08.481812131 +0000 UTC m=+159.394148581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.055788 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ncj99" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.059004 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvfwg"] Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.061274 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:08 crc kubenswrapper[4955]: E0202 13:05:08.061610 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.561594379 +0000 UTC m=+159.473930829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.162229 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:08 crc kubenswrapper[4955]: E0202 13:05:08.162770 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.662758173 +0000 UTC m=+159.575094623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.203512 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5hd5"] Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.240596 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2nfd"] Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.263335 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:08 crc kubenswrapper[4955]: E0202 13:05:08.263722 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.763707962 +0000 UTC m=+159.676044412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.365091 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:08 crc kubenswrapper[4955]: E0202 13:05:08.365358 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.865347418 +0000 UTC m=+159.777683868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rrdgr" (UID: "d19da25f-25c6-4654-86a1-f681e982e738") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.435600 4955 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.466187 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:08 crc kubenswrapper[4955]: E0202 13:05:08.466646 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:08.966628565 +0000 UTC m=+159.878965015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.500188 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-82zvz"] Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.519083 4955 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T13:05:08.435825446Z","Handler":null,"Name":""} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.527763 4955 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.527907 4955 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.568024 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.594741 4955 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.594780 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.641896 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rrdgr\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.668542 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.692881 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.830481 4955 generic.go:334] "Generic (PLEG): container finished" podID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerID="75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453" exitCode=0 Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.830571 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5hd5" event={"ID":"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5","Type":"ContainerDied","Data":"75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.830604 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5hd5" event={"ID":"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5","Type":"ContainerStarted","Data":"957c808df461c31833066d783fa88861acd3f2a4aebebfaedd0d035b49affa7e"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.833507 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.834202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" event={"ID":"e8b7ba94-1502-4aa8-aa07-daab4d369add","Type":"ContainerStarted","Data":"6e3853284bbc55344f924c2cedf8954643ec8c5d80bae86733eefe5902189d1a"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.834243 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" event={"ID":"e8b7ba94-1502-4aa8-aa07-daab4d369add","Type":"ContainerStarted","Data":"dbee7eefe7a080b8346e8716c4385642b59c21bac7c92bc5bb2eda84b1e6c1c2"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.839206 4955 generic.go:334] "Generic (PLEG): container finished" podID="8954d9ba-b759-4256-834b-e781be220107" containerID="6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c" exitCode=0 Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.839297 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82zvz" event={"ID":"8954d9ba-b759-4256-834b-e781be220107","Type":"ContainerDied","Data":"6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.839325 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82zvz" event={"ID":"8954d9ba-b759-4256-834b-e781be220107","Type":"ContainerStarted","Data":"1b3de0f7d53e78d9d40b24264bdf9f06bd5e951eb64b87e9ea4a975553780a23"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.843933 4955 generic.go:334] "Generic (PLEG): container finished" podID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerID="51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7" exitCode=0 Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.844019 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvfwg" event={"ID":"bdc220a9-b1a9-4d3b-aba5-37820b63181f","Type":"ContainerDied","Data":"51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.844056 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvfwg" 
event={"ID":"bdc220a9-b1a9-4d3b-aba5-37820b63181f","Type":"ContainerStarted","Data":"475d36199c217ed87748d1a11e6d024d0f544303578681dfe04f553bb940c304"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.846321 4955 generic.go:334] "Generic (PLEG): container finished" podID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerID="a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7" exitCode=0 Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.847347 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2nfd" event={"ID":"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261","Type":"ContainerDied","Data":"a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.847386 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2nfd" event={"ID":"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261","Type":"ContainerStarted","Data":"de885ee71a272902fe1ce99d3d47bd6d7ccee347624b1b2d2db909c7de439699"} Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.899368 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9mrfh" podStartSLOduration=10.899350126 podStartE2EDuration="10.899350126s" podCreationTimestamp="2026-02-02 13:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:08.896154751 +0000 UTC m=+159.808491201" watchObservedRunningTime="2026-02-02 13:05:08.899350126 +0000 UTC m=+159.811686576" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.910657 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:08 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:08 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:08 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.910708 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:08 crc kubenswrapper[4955]: I0202 13:05:08.915044 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.123970 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rrdgr"] Feb 02 13:05:09 crc kubenswrapper[4955]: W0202 13:05:09.134868 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19da25f_25c6_4654_86a1_f681e982e738.slice/crio-5dd44c10af024d9de3471f83ab478b8ad6c88c85a4e6c641f4eaece1b800a915 WatchSource:0}: Error finding container 5dd44c10af024d9de3471f83ab478b8ad6c88c85a4e6c641f4eaece1b800a915: Status 404 returned error can't find the container with id 5dd44c10af024d9de3471f83ab478b8ad6c88c85a4e6c641f4eaece1b800a915 Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.191925 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btrhx"] Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.193855 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.196442 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.202371 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btrhx"] Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.386592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-utilities\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.386901 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-catalog-content\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.386999 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-kube-api-access-56cmj\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.488133 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-utilities\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.488213 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-catalog-content\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" 
Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.488291 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-kube-api-access-56cmj\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.489010 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-utilities\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.489058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-catalog-content\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.510174 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-kube-api-access-56cmj\") pod \"redhat-marketplace-btrhx\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.523133 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.588359 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8pj9k"] Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.589565 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.599835 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pj9k"] Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.691196 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-catalog-content\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.691499 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-utilities\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.691597 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756fq\" (UniqueName: \"kubernetes.io/projected/b349345a-3983-4555-8ad2-eb5808c83668-kube-api-access-756fq\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.724158 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.792417 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-catalog-content\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.792459 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-utilities\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.792507 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756fq\" (UniqueName: \"kubernetes.io/projected/b349345a-3983-4555-8ad2-eb5808c83668-kube-api-access-756fq\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.793256 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-catalog-content\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.793644 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-utilities\") 
pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.824467 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756fq\" (UniqueName: \"kubernetes.io/projected/b349345a-3983-4555-8ad2-eb5808c83668-kube-api-access-756fq\") pod \"redhat-marketplace-8pj9k\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.857685 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" event={"ID":"d19da25f-25c6-4654-86a1-f681e982e738","Type":"ContainerStarted","Data":"2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057"} Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.857724 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.857735 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" event={"ID":"d19da25f-25c6-4654-86a1-f681e982e738","Type":"ContainerStarted","Data":"5dd44c10af024d9de3471f83ab478b8ad6c88c85a4e6c641f4eaece1b800a915"} Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.876489 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" podStartSLOduration=134.876471412 podStartE2EDuration="2m14.876471412s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:09.875480919 +0000 UTC m=+160.787817369" watchObservedRunningTime="2026-02-02 13:05:09.876471412 +0000 UTC m=+160.788807862" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.909641 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:09 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:09 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:09 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.909697 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:09 crc kubenswrapper[4955]: I0202 13:05:09.921339 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.035141 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btrhx"] Feb 02 13:05:10 crc kubenswrapper[4955]: W0202 13:05:10.068512 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86b11a5e_9d92_4f01_899d_51f7f9a2bbce.slice/crio-ce91035d10a5ccc0f66955f900259c83bdbb843a0e08b40930e676395a2dfba5 WatchSource:0}: Error finding container ce91035d10a5ccc0f66955f900259c83bdbb843a0e08b40930e676395a2dfba5: Status 404 returned error can't find the container with id ce91035d10a5ccc0f66955f900259c83bdbb843a0e08b40930e676395a2dfba5 Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.198483 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fhpwq"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.199954 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.214117 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.240944 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhpwq"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.282424 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.283275 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.286953 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.287162 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.299419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-utilities\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.299472 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cmg\" (UniqueName: \"kubernetes.io/projected/c603a0e0-e73c-4d68-b3f5-947d61505f43-kube-api-access-72cmg\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.299520 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-catalog-content\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.367460 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.407727 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-catalog-content\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.407837 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c05e27a9-a05d-40f8-845a-5ff0909c562a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.407865 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-utilities\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.407906 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cmg\" (UniqueName: \"kubernetes.io/projected/c603a0e0-e73c-4d68-b3f5-947d61505f43-kube-api-access-72cmg\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.407939 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c05e27a9-a05d-40f8-845a-5ff0909c562a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.408442 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-catalog-content\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.408743 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-utilities\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.417493 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pj9k"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.431976 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72cmg\" (UniqueName: \"kubernetes.io/projected/c603a0e0-e73c-4d68-b3f5-947d61505f43-kube-api-access-72cmg\") pod \"redhat-operators-fhpwq\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.508620 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c05e27a9-a05d-40f8-845a-5ff0909c562a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.508692 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c05e27a9-a05d-40f8-845a-5ff0909c562a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.508994 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c05e27a9-a05d-40f8-845a-5ff0909c562a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.533887 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.553839 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c05e27a9-a05d-40f8-845a-5ff0909c562a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.604443 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvvd2"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.605630 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.610277 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvvd2"] Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.639095 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.710360 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-utilities\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.710519 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-catalog-content\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.710937 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vzx\" (UniqueName: \"kubernetes.io/projected/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-kube-api-access-w7vzx\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.796303 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-psbd9" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.812698 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vzx\" (UniqueName: \"kubernetes.io/projected/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-kube-api-access-w7vzx\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.812905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-utilities\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.812982 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-catalog-content\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.813616 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-utilities\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.813967 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-catalog-content\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.836783 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vzx\" (UniqueName: \"kubernetes.io/projected/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-kube-api-access-w7vzx\") pod \"redhat-operators-qvvd2\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.897881 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerStarted","Data":"c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d"} Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.898178 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerStarted","Data":"80027a853c041c035f7b4d76fb792f0e08cd0e6ff6fadcdf43d5e362cb2c0ca5"} Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.900289 4955 generic.go:334] "Generic (PLEG): container finished" podID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerID="75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b" exitCode=0 Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.900332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btrhx" event={"ID":"86b11a5e-9d92-4f01-899d-51f7f9a2bbce","Type":"ContainerDied","Data":"75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b"} Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.900348 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btrhx" event={"ID":"86b11a5e-9d92-4f01-899d-51f7f9a2bbce","Type":"ContainerStarted","Data":"ce91035d10a5ccc0f66955f900259c83bdbb843a0e08b40930e676395a2dfba5"} Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.902093 4955 generic.go:334] "Generic (PLEG): container finished" podID="46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" containerID="547d31ab6b277dd1010df3b53f91b5cf201d77244f4c6a5254af439377873da7" exitCode=0 Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.902182 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" event={"ID":"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb","Type":"ContainerDied","Data":"547d31ab6b277dd1010df3b53f91b5cf201d77244f4c6a5254af439377873da7"} Feb 
02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.913375 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:10 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:10 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:10 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.913428 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.941586 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:10 crc kubenswrapper[4955]: I0202 13:05:10.968772 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 13:05:10 crc kubenswrapper[4955]: W0202 13:05:10.996040 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc05e27a9_a05d_40f8_845a_5ff0909c562a.slice/crio-7dd932bc106c4cffe598953c2389fdda653c2e98e4c60531e033f744506fc2ff WatchSource:0}: Error finding container 7dd932bc106c4cffe598953c2389fdda653c2e98e4c60531e033f744506fc2ff: Status 404 returned error can't find the container with id 7dd932bc106c4cffe598953c2389fdda653c2e98e4c60531e033f744506fc2ff Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.011764 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhpwq"] Feb 02 13:05:11 crc kubenswrapper[4955]: W0202 13:05:11.031329 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc603a0e0_e73c_4d68_b3f5_947d61505f43.slice/crio-2047a53cd80e2cffde343678e7a0df375bab3254772ddd10c953d56180252950 WatchSource:0}: Error finding container 2047a53cd80e2cffde343678e7a0df375bab3254772ddd10c953d56180252950: Status 404 returned error can't find the container with id 2047a53cd80e2cffde343678e7a0df375bab3254772ddd10c953d56180252950 Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.275031 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvvd2"] Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.319331 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.320505 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.328228 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.456887 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.456921 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:05:11 
crc kubenswrapper[4955]: I0202 13:05:11.463810 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.473884 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.474290 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.475997 4955 patch_prober.go:28] interesting pod/console-f9d7485db-6tx72 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.476043 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6tx72" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.501222 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-59w8t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.501295 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-59w8t" podUID="e9e6e2cf-9009-4951-bd8d-7878af4bd041" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.507870 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-59w8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.508036 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-59w8t" podUID="e9e6e2cf-9009-4951-bd8d-7878af4bd041" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.907369 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.911115 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:11 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:11 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:11 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.911189 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.933214 4955 generic.go:334] "Generic (PLEG): container finished" podID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerID="672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b" exitCode=0 Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.933272 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvd2" event={"ID":"77d0ea22-5c7c-49a5-b9d3-288ec1be5887","Type":"ContainerDied","Data":"672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.933299 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvd2" event={"ID":"77d0ea22-5c7c-49a5-b9d3-288ec1be5887","Type":"ContainerStarted","Data":"710fa8adb47195e4d44887ca6e8b2db39098a17b5e82b31e07b4aa00b2b2d869"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.937527 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c05e27a9-a05d-40f8-845a-5ff0909c562a","Type":"ContainerStarted","Data":"70071f1ef6c28e261480a32938b09e30eb62396ae2e7b4de8ed377cfde13eed8"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.937603 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c05e27a9-a05d-40f8-845a-5ff0909c562a","Type":"ContainerStarted","Data":"7dd932bc106c4cffe598953c2389fdda653c2e98e4c60531e033f744506fc2ff"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.943715 4955 generic.go:334] "Generic (PLEG): container finished" podID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerID="ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60" exitCode=0 Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.943837 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhpwq" event={"ID":"c603a0e0-e73c-4d68-b3f5-947d61505f43","Type":"ContainerDied","Data":"ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.943890 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhpwq" event={"ID":"c603a0e0-e73c-4d68-b3f5-947d61505f43","Type":"ContainerStarted","Data":"2047a53cd80e2cffde343678e7a0df375bab3254772ddd10c953d56180252950"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.954643 4955 generic.go:334] "Generic (PLEG): container finished" podID="b349345a-3983-4555-8ad2-eb5808c83668" containerID="c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d" exitCode=0 Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.955105 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerDied","Data":"c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d"} Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.961707 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g44jq" Feb 02 13:05:11 crc kubenswrapper[4955]: I0202 13:05:11.962909 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghqn5" Feb 02 13:05:11 crc kubenswrapper[4955]: 
I0202 13:05:11.982073 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.982053696 podStartE2EDuration="1.982053696s" podCreationTimestamp="2026-02-02 13:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:11.978324538 +0000 UTC m=+162.890660988" watchObservedRunningTime="2026-02-02 13:05:11.982053696 +0000 UTC m=+162.894390146" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.395074 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.563079 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q87q\" (UniqueName: \"kubernetes.io/projected/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-kube-api-access-5q87q\") pod \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.563139 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-secret-volume\") pod \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.563213 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-config-volume\") pod \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\" (UID: \"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb\") " Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.564241 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" (UID: "46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.574836 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" (UID: "46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.584387 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-kube-api-access-5q87q" (OuterVolumeSpecName: "kube-api-access-5q87q") pod "46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" (UID: "46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb"). InnerVolumeSpecName "kube-api-access-5q87q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.664374 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.664411 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q87q\" (UniqueName: \"kubernetes.io/projected/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-kube-api-access-5q87q\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.664424 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.916466 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:12 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:12 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:12 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.916812 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.983299 4955 generic.go:334] "Generic (PLEG): container finished" podID="c05e27a9-a05d-40f8-845a-5ff0909c562a" containerID="70071f1ef6c28e261480a32938b09e30eb62396ae2e7b4de8ed377cfde13eed8" exitCode=0 Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.983371 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c05e27a9-a05d-40f8-845a-5ff0909c562a","Type":"ContainerDied","Data":"70071f1ef6c28e261480a32938b09e30eb62396ae2e7b4de8ed377cfde13eed8"} Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.987697 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" event={"ID":"46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb","Type":"ContainerDied","Data":"4565151c3e7d2fc0a32b581c3807561386b72a7622e0cc8aa296e39f7151f923"} Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.987745 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4565151c3e7d2fc0a32b581c3807561386b72a7622e0cc8aa296e39f7151f923" Feb 02 13:05:12 crc kubenswrapper[4955]: I0202 13:05:12.987839 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.736394 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 13:05:13 crc kubenswrapper[4955]: E0202 13:05:13.736636 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" containerName="collect-profiles" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.736650 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" containerName="collect-profiles" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.739450 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" containerName="collect-profiles" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.743972 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.744077 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.746592 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.746786 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.884378 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.884422 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.909377 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:13 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:13 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:13 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.909650 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.986689 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.986742 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:13 crc kubenswrapper[4955]: I0202 13:05:13.987166 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.009526 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.068108 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.367014 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.380830 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.492127 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c05e27a9-a05d-40f8-845a-5ff0909c562a-kubelet-dir\") pod \"c05e27a9-a05d-40f8-845a-5ff0909c562a\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.492214 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c05e27a9-a05d-40f8-845a-5ff0909c562a-kube-api-access\") pod \"c05e27a9-a05d-40f8-845a-5ff0909c562a\" (UID: \"c05e27a9-a05d-40f8-845a-5ff0909c562a\") " Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.492327 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05e27a9-a05d-40f8-845a-5ff0909c562a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c05e27a9-a05d-40f8-845a-5ff0909c562a" (UID: "c05e27a9-a05d-40f8-845a-5ff0909c562a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.492549 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c05e27a9-a05d-40f8-845a-5ff0909c562a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.496531 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05e27a9-a05d-40f8-845a-5ff0909c562a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c05e27a9-a05d-40f8-845a-5ff0909c562a" (UID: "c05e27a9-a05d-40f8-845a-5ff0909c562a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.593714 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c05e27a9-a05d-40f8-845a-5ff0909c562a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.910279 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:14 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:14 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:14 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:14 crc kubenswrapper[4955]: I0202 13:05:14.910340 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:15 crc kubenswrapper[4955]: I0202 13:05:15.024930 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2","Type":"ContainerStarted","Data":"dc5f7a6daa57701c79e2aa629f4f367d06f1a42a107af69a55f0330633f075af"} Feb 02 13:05:15 crc kubenswrapper[4955]: I0202 13:05:15.032810 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c05e27a9-a05d-40f8-845a-5ff0909c562a","Type":"ContainerDied","Data":"7dd932bc106c4cffe598953c2389fdda653c2e98e4c60531e033f744506fc2ff"} Feb 02 13:05:15 crc kubenswrapper[4955]: I0202 13:05:15.032857 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd932bc106c4cffe598953c2389fdda653c2e98e4c60531e033f744506fc2ff" Feb 02 13:05:15 crc kubenswrapper[4955]: I0202 13:05:15.032927 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:05:15 crc kubenswrapper[4955]: I0202 13:05:15.909422 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:15 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:15 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:15 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:15 crc kubenswrapper[4955]: I0202 13:05:15.909770 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:16 crc kubenswrapper[4955]: I0202 13:05:16.053882 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2","Type":"ContainerStarted","Data":"308c2c4ae40b136bb8b4ca39262ee10232d992ea1f0ac59d16baab10afbbeede"} Feb 02 13:05:16 crc kubenswrapper[4955]: I0202 13:05:16.911047 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:16 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:16 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:16 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:16 crc kubenswrapper[4955]: I0202 13:05:16.911096 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.093817 4955 generic.go:334] "Generic (PLEG): container finished" podID="d22e89ff-98eb-4f45-a14c-4a59c5eb72b2" containerID="308c2c4ae40b136bb8b4ca39262ee10232d992ea1f0ac59d16baab10afbbeede" exitCode=0 Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.093868 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2","Type":"ContainerDied","Data":"308c2c4ae40b136bb8b4ca39262ee10232d992ea1f0ac59d16baab10afbbeede"} Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.368468 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ddl6c" Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.853486 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.861359 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009c80d7-da9c-46cc-b0d2-570de04e6510-metrics-certs\") pod \"network-metrics-daemon-hjcmj\" (UID: \"009c80d7-da9c-46cc-b0d2-570de04e6510\") " 
pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.908886 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:17 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:17 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:17 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:17 crc kubenswrapper[4955]: I0202 13:05:17.908974 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:18 crc kubenswrapper[4955]: I0202 13:05:18.055459 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjcmj" Feb 02 13:05:18 crc kubenswrapper[4955]: I0202 13:05:18.916606 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:18 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:18 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:18 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:18 crc kubenswrapper[4955]: I0202 13:05:18.916988 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:19 crc kubenswrapper[4955]: I0202 13:05:19.909683 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:19 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:19 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:19 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:19 crc kubenswrapper[4955]: I0202 13:05:19.909745 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:20 crc kubenswrapper[4955]: I0202 13:05:20.910590 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:20 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:20 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:20 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:20 crc kubenswrapper[4955]: I0202 13:05:20.910695 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.474116 4955 patch_prober.go:28] interesting pod/console-f9d7485db-6tx72 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.474198 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6tx72" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.500078 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-59w8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.500136 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-59w8t" podUID="e9e6e2cf-9009-4951-bd8d-7878af4bd041" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.500078 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-59w8t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.500319 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-59w8t" podUID="e9e6e2cf-9009-4951-bd8d-7878af4bd041" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.910449 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:21 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Feb 02 13:05:21 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:21 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:21 crc kubenswrapper[4955]: I0202 13:05:21.910503 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:22 crc kubenswrapper[4955]: I0202 13:05:22.909960 4955 patch_prober.go:28] interesting pod/router-default-5444994796-bjmqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:05:22 crc kubenswrapper[4955]: [+]has-synced ok Feb 02 13:05:22 crc kubenswrapper[4955]: [+]process-running ok Feb 02 13:05:22 crc kubenswrapper[4955]: healthz check failed Feb 02 13:05:22 crc kubenswrapper[4955]: I0202 13:05:22.910258 4955 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-bjmqj" podUID="95f0540a-1739-40df-9242-c9c2e6ccac7f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:05:23 crc kubenswrapper[4955]: I0202 13:05:23.911945 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:23 crc kubenswrapper[4955]: I0202 13:05:23.916928 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-bjmqj" Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.584805 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.743117 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kubelet-dir\") pod \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.743224 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d22e89ff-98eb-4f45-a14c-4a59c5eb72b2" (UID: "d22e89ff-98eb-4f45-a14c-4a59c5eb72b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.743547 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kube-api-access\") pod \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\" (UID: \"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2\") " Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.743854 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.748394 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d22e89ff-98eb-4f45-a14c-4a59c5eb72b2" (UID: "d22e89ff-98eb-4f45-a14c-4a59c5eb72b2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.845363 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d22e89ff-98eb-4f45-a14c-4a59c5eb72b2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:24 crc kubenswrapper[4955]: I0202 13:05:24.935475 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hjcmj"] Feb 02 13:05:24 crc kubenswrapper[4955]: W0202 13:05:24.945359 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009c80d7_da9c_46cc_b0d2_570de04e6510.slice/crio-e9c0bfecdcee4ebf4893ae01d7f5b9f59f179ee403d5c518d8ca54bd4a73ff9f WatchSource:0}: Error finding container e9c0bfecdcee4ebf4893ae01d7f5b9f59f179ee403d5c518d8ca54bd4a73ff9f: Status 404 returned error can't find the container with id e9c0bfecdcee4ebf4893ae01d7f5b9f59f179ee403d5c518d8ca54bd4a73ff9f Feb 02 13:05:25 crc kubenswrapper[4955]: I0202 13:05:25.180525 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d22e89ff-98eb-4f45-a14c-4a59c5eb72b2","Type":"ContainerDied","Data":"dc5f7a6daa57701c79e2aa629f4f367d06f1a42a107af69a55f0330633f075af"} Feb 02 13:05:25 crc kubenswrapper[4955]: I0202 13:05:25.180895 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5f7a6daa57701c79e2aa629f4f367d06f1a42a107af69a55f0330633f075af" Feb 02 13:05:25 crc kubenswrapper[4955]: I0202 13:05:25.180591 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:05:25 crc kubenswrapper[4955]: I0202 13:05:25.182100 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" event={"ID":"009c80d7-da9c-46cc-b0d2-570de04e6510","Type":"ContainerStarted","Data":"e9c0bfecdcee4ebf4893ae01d7f5b9f59f179ee403d5c518d8ca54bd4a73ff9f"} Feb 02 13:05:26 crc kubenswrapper[4955]: I0202 13:05:26.189031 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" event={"ID":"009c80d7-da9c-46cc-b0d2-570de04e6510","Type":"ContainerStarted","Data":"74a5cb9e83dfb14b2ef7a4744beab52624487360904e99b4a6ab08d349ed091f"} Feb 02 13:05:28 crc kubenswrapper[4955]: I0202 13:05:28.920971 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:05:31 crc kubenswrapper[4955]: I0202 13:05:31.490953 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:05:31 crc kubenswrapper[4955]: I0202 13:05:31.496873 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6tx72" Feb 02 13:05:31 crc kubenswrapper[4955]: I0202 13:05:31.524638 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-59w8t" Feb 02 13:05:33 crc kubenswrapper[4955]: I0202 13:05:33.017365 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:05:33 crc 
kubenswrapper[4955]: I0202 13:05:33.017449 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:05:37 crc kubenswrapper[4955]: I0202 13:05:37.944717 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.246596 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.247310 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-756fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8pj9k_openshift-marketplace(b349345a-3983-4555-8ad2-eb5808c83668): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.248753 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8pj9k" podUID="b349345a-3983-4555-8ad2-eb5808c83668" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.260209 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8pj9k" 
podUID="b349345a-3983-4555-8ad2-eb5808c83668" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.337370 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.337519 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72cmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fhpwq_openshift-marketplace(c603a0e0-e73c-4d68-b3f5-947d61505f43): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.338799 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fhpwq" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.373521 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.373635 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56cmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-btrhx_openshift-marketplace(86b11a5e-9d92-4f01-899d-51f7f9a2bbce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:05:40 crc kubenswrapper[4955]: E0202 13:05:40.375387 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-btrhx" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.260331 4955 generic.go:334] "Generic (PLEG): container finished" podID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerID="686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e" exitCode=0 Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.260414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvd2" event={"ID":"77d0ea22-5c7c-49a5-b9d3-288ec1be5887","Type":"ContainerDied","Data":"686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e"} Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.262516 4955 generic.go:334] "Generic (PLEG): container finished" podID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerID="99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c" exitCode=0 Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.262593 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2nfd" event={"ID":"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261","Type":"ContainerDied","Data":"99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c"} Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.266159 4955 generic.go:334] "Generic (PLEG): container finished" podID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerID="de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d" exitCode=0 Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.266205 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5hd5" 
event={"ID":"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5","Type":"ContainerDied","Data":"de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d"} Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.269726 4955 generic.go:334] "Generic (PLEG): container finished" podID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerID="a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336" exitCode=0 Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.269761 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvfwg" event={"ID":"bdc220a9-b1a9-4d3b-aba5-37820b63181f","Type":"ContainerDied","Data":"a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336"} Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.272631 4955 generic.go:334] "Generic (PLEG): container finished" podID="8954d9ba-b759-4256-834b-e781be220107" containerID="c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103" exitCode=0 Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.272996 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82zvz" event={"ID":"8954d9ba-b759-4256-834b-e781be220107","Type":"ContainerDied","Data":"c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103"} Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.279698 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hjcmj" event={"ID":"009c80d7-da9c-46cc-b0d2-570de04e6510","Type":"ContainerStarted","Data":"36250dace3b32fbc7a715c439eb988cd5cdf4e5c71abd1da706f64e49d55e809"} Feb 02 13:05:41 crc kubenswrapper[4955]: E0202 13:05:41.283540 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btrhx" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" Feb 02 13:05:41 crc kubenswrapper[4955]: E0202 13:05:41.283806 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fhpwq" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.399786 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hjcmj" podStartSLOduration=166.399767237 podStartE2EDuration="2m46.399767237s" podCreationTimestamp="2026-02-02 13:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:41.396021749 +0000 UTC m=+192.308358199" watchObservedRunningTime="2026-02-02 13:05:41.399767237 +0000 UTC m=+192.312103687" Feb 02 13:05:41 crc kubenswrapper[4955]: I0202 13:05:41.711786 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vbhpf" Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.287230 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82zvz" event={"ID":"8954d9ba-b759-4256-834b-e781be220107","Type":"ContainerStarted","Data":"115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef"} Feb 02 13:05:42 crc 
kubenswrapper[4955]: I0202 13:05:42.289461 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvfwg" event={"ID":"bdc220a9-b1a9-4d3b-aba5-37820b63181f","Type":"ContainerStarted","Data":"36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9"} Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.291570 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvd2" event={"ID":"77d0ea22-5c7c-49a5-b9d3-288ec1be5887","Type":"ContainerStarted","Data":"7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021"} Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.293751 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2nfd" event={"ID":"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261","Type":"ContainerStarted","Data":"d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54"} Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.295611 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5hd5" event={"ID":"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5","Type":"ContainerStarted","Data":"8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17"} Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.310888 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-82zvz" podStartSLOduration=2.401270335 podStartE2EDuration="35.310874451s" podCreationTimestamp="2026-02-02 13:05:07 +0000 UTC" firstStartedPulling="2026-02-02 13:05:08.842317606 +0000 UTC m=+159.754654056" lastFinishedPulling="2026-02-02 13:05:41.751921722 +0000 UTC m=+192.664258172" observedRunningTime="2026-02-02 13:05:42.30827619 +0000 UTC m=+193.220612640" watchObservedRunningTime="2026-02-02 13:05:42.310874451 +0000 UTC m=+193.223210901" Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.350170 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5hd5" podStartSLOduration=2.48807259 podStartE2EDuration="35.350153741s" podCreationTimestamp="2026-02-02 13:05:07 +0000 UTC" firstStartedPulling="2026-02-02 13:05:08.83232503 +0000 UTC m=+159.744661480" lastFinishedPulling="2026-02-02 13:05:41.694406191 +0000 UTC m=+192.606742631" observedRunningTime="2026-02-02 13:05:42.330832733 +0000 UTC m=+193.243169203" watchObservedRunningTime="2026-02-02 13:05:42.350153741 +0000 UTC m=+193.262490191" Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.367925 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rvfwg" podStartSLOduration=3.41673583 podStartE2EDuration="36.367907751s" podCreationTimestamp="2026-02-02 13:05:06 +0000 UTC" firstStartedPulling="2026-02-02 13:05:08.845376368 +0000 UTC m=+159.757712818" lastFinishedPulling="2026-02-02 13:05:41.796548289 +0000 UTC m=+192.708884739" observedRunningTime="2026-02-02 13:05:42.366643131 +0000 UTC m=+193.278979581" watchObservedRunningTime="2026-02-02 13:05:42.367907751 +0000 UTC m=+193.280244211" Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.370154 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k2nfd" podStartSLOduration=2.417279304 podStartE2EDuration="35.370144454s" podCreationTimestamp="2026-02-02 13:05:07 +0000 UTC" firstStartedPulling="2026-02-02 13:05:08.847466748 +0000 UTC m=+159.759803208" 
lastFinishedPulling="2026-02-02 13:05:41.800331908 +0000 UTC m=+192.712668358" observedRunningTime="2026-02-02 13:05:42.349022374 +0000 UTC m=+193.261358824" watchObservedRunningTime="2026-02-02 13:05:42.370144454 +0000 UTC m=+193.282480904" Feb 02 13:05:42 crc kubenswrapper[4955]: I0202 13:05:42.391212 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvvd2" podStartSLOduration=2.606568386 podStartE2EDuration="32.391195332s" podCreationTimestamp="2026-02-02 13:05:10 +0000 UTC" firstStartedPulling="2026-02-02 13:05:11.94883579 +0000 UTC m=+162.861172240" lastFinishedPulling="2026-02-02 13:05:41.733462736 +0000 UTC m=+192.645799186" observedRunningTime="2026-02-02 13:05:42.388310424 +0000 UTC m=+193.300646884" watchObservedRunningTime="2026-02-02 13:05:42.391195332 +0000 UTC m=+193.303531782" Feb 02 13:05:43 crc kubenswrapper[4955]: I0202 13:05:43.118853 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmwz5"] Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.327628 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.327951 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.541739 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.542020 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.560901 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.582841 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.747206 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.747255 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.787297 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.940717 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.940858 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:47 crc kubenswrapper[4955]: I0202 13:05:47.975636 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:48 crc kubenswrapper[4955]: I0202 13:05:48.381694 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:48 crc 
kubenswrapper[4955]: I0202 13:05:48.388393 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:05:48 crc kubenswrapper[4955]: I0202 13:05:48.392916 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:48 crc kubenswrapper[4955]: I0202 13:05:48.402469 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.725853 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 13:05:49 crc kubenswrapper[4955]: E0202 13:05:49.726038 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22e89ff-98eb-4f45-a14c-4a59c5eb72b2" containerName="pruner" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.726050 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22e89ff-98eb-4f45-a14c-4a59c5eb72b2" containerName="pruner" Feb 02 13:05:49 crc kubenswrapper[4955]: E0202 13:05:49.726067 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05e27a9-a05d-40f8-845a-5ff0909c562a" containerName="pruner" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.726073 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05e27a9-a05d-40f8-845a-5ff0909c562a" containerName="pruner" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.726178 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22e89ff-98eb-4f45-a14c-4a59c5eb72b2" containerName="pruner" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.726191 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05e27a9-a05d-40f8-845a-5ff0909c562a" containerName="pruner" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.726536 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.728070 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.728240 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.731011 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.787655 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2nfd"] Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.853202 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f67e877-b695-49c6-b42b-56faecbd5b0b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.853242 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f67e877-b695-49c6-b42b-56faecbd5b0b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.954775 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f67e877-b695-49c6-b42b-56faecbd5b0b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.955132 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f67e877-b695-49c6-b42b-56faecbd5b0b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.955230 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f67e877-b695-49c6-b42b-56faecbd5b0b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.976198 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f67e877-b695-49c6-b42b-56faecbd5b0b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:49 crc kubenswrapper[4955]: I0202 13:05:49.991012 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-82zvz"] Feb 02 13:05:50 crc kubenswrapper[4955]: I0202 13:05:50.043460 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:50 crc kubenswrapper[4955]: I0202 13:05:50.423901 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 13:05:50 crc kubenswrapper[4955]: I0202 13:05:50.942231 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:50 crc kubenswrapper[4955]: I0202 13:05:50.942573 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:50 crc kubenswrapper[4955]: I0202 13:05:50.985764 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.357600 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9f67e877-b695-49c6-b42b-56faecbd5b0b","Type":"ContainerStarted","Data":"813bb796706d807297c5238032a01107db0d8d9b7fe7103299bcc82221cd6986"} Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.357635 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9f67e877-b695-49c6-b42b-56faecbd5b0b","Type":"ContainerStarted","Data":"eb01d7b2b1a4800f9543f0feda62fc314ab9c05e56edafe6dcdeb3fcc54e7743"} Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.357747 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k2nfd" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="registry-server" containerID="cri-o://d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54" gracePeriod=2 Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.358036 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-82zvz" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="registry-server" containerID="cri-o://115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef" gracePeriod=2 Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.423993 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.443547 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.443527574 podStartE2EDuration="2.443527574s" podCreationTimestamp="2026-02-02 13:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:51.375077446 +0000 UTC m=+202.287413916" watchObservedRunningTime="2026-02-02 13:05:51.443527574 +0000 UTC m=+202.355864024" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.854820 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.859986 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.979639 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-utilities\") pod \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.979699 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-catalog-content\") pod \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.980688 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-catalog-content\") pod \"8954d9ba-b759-4256-834b-e781be220107\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.980734 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqp6d\" (UniqueName: \"kubernetes.io/projected/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-kube-api-access-tqp6d\") pod \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\" (UID: \"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261\") " Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.980786 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vln\" (UniqueName: \"kubernetes.io/projected/8954d9ba-b759-4256-834b-e781be220107-kube-api-access-s8vln\") pod \"8954d9ba-b759-4256-834b-e781be220107\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.980865 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-utilities\") pod \"8954d9ba-b759-4256-834b-e781be220107\" (UID: \"8954d9ba-b759-4256-834b-e781be220107\") " Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.980357 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-utilities" (OuterVolumeSpecName: "utilities") pod "5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" (UID: "5f1d75c2-d5d1-4281-b8cd-55f2e8f84261"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.983947 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-utilities" (OuterVolumeSpecName: "utilities") pod "8954d9ba-b759-4256-834b-e781be220107" (UID: "8954d9ba-b759-4256-834b-e781be220107"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.987291 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8954d9ba-b759-4256-834b-e781be220107-kube-api-access-s8vln" (OuterVolumeSpecName: "kube-api-access-s8vln") pod "8954d9ba-b759-4256-834b-e781be220107" (UID: "8954d9ba-b759-4256-834b-e781be220107"). InnerVolumeSpecName "kube-api-access-s8vln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:51 crc kubenswrapper[4955]: I0202 13:05:51.989724 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-kube-api-access-tqp6d" (OuterVolumeSpecName: "kube-api-access-tqp6d") pod "5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" (UID: "5f1d75c2-d5d1-4281-b8cd-55f2e8f84261"). InnerVolumeSpecName "kube-api-access-tqp6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.082965 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqp6d\" (UniqueName: \"kubernetes.io/projected/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-kube-api-access-tqp6d\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.083000 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8vln\" (UniqueName: \"kubernetes.io/projected/8954d9ba-b759-4256-834b-e781be220107-kube-api-access-s8vln\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.083012 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.083024 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.151764 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8954d9ba-b759-4256-834b-e781be220107" (UID: "8954d9ba-b759-4256-834b-e781be220107"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.183962 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8954d9ba-b759-4256-834b-e781be220107-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.256009 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" (UID: "5f1d75c2-d5d1-4281-b8cd-55f2e8f84261"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.284880 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.364650 4955 generic.go:334] "Generic (PLEG): container finished" podID="8954d9ba-b759-4256-834b-e781be220107" containerID="115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef" exitCode=0 Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.364698 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82zvz" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.364753 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82zvz" event={"ID":"8954d9ba-b759-4256-834b-e781be220107","Type":"ContainerDied","Data":"115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef"} Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.364812 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82zvz" event={"ID":"8954d9ba-b759-4256-834b-e781be220107","Type":"ContainerDied","Data":"1b3de0f7d53e78d9d40b24264bdf9f06bd5e951eb64b87e9ea4a975553780a23"} Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.364835 4955 scope.go:117] "RemoveContainer" containerID="115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.366196 4955 generic.go:334] "Generic (PLEG): container finished" podID="9f67e877-b695-49c6-b42b-56faecbd5b0b" containerID="813bb796706d807297c5238032a01107db0d8d9b7fe7103299bcc82221cd6986" exitCode=0 Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.366249 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9f67e877-b695-49c6-b42b-56faecbd5b0b","Type":"ContainerDied","Data":"813bb796706d807297c5238032a01107db0d8d9b7fe7103299bcc82221cd6986"} Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.369260 4955 generic.go:334] "Generic (PLEG): container finished" podID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerID="d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54" exitCode=0 Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.369300 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2nfd" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.369325 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2nfd" event={"ID":"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261","Type":"ContainerDied","Data":"d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54"} Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.369345 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2nfd" event={"ID":"5f1d75c2-d5d1-4281-b8cd-55f2e8f84261","Type":"ContainerDied","Data":"de885ee71a272902fe1ce99d3d47bd6d7ccee347624b1b2d2db909c7de439699"} Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.387323 4955 scope.go:117] "RemoveContainer" containerID="c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.409045 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-82zvz"] Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.419625 4955 scope.go:117] "RemoveContainer" containerID="6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.419633 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-82zvz"] Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.422112 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2nfd"] Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.425166 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k2nfd"] Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.433047 4955 scope.go:117] "RemoveContainer" containerID="115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef" Feb 02 13:05:52 crc kubenswrapper[4955]: E0202 13:05:52.433501 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef\": container with ID starting with 115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef not found: ID does not exist" containerID="115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.433532 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef"} err="failed to get container status \"115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef\": rpc error: code = NotFound desc = could not find container \"115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef\": container with ID starting with 115ee7997b03925b4e47186e7d64baf9e8fe70c6824743473a62686d5d38dfef not found: ID does not exist" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.433591 4955 scope.go:117] "RemoveContainer" containerID="c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103" Feb 02 13:05:52 crc kubenswrapper[4955]: E0202 13:05:52.433877 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103\": container with ID starting with c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103 not found: 
ID does not exist" containerID="c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.433898 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103"} err="failed to get container status \"c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103\": rpc error: code = NotFound desc = could not find container \"c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103\": container with ID starting with c8658d33d980cccb449b74b16b2260b14cd54bb9af61db2a91f97856203e7103 not found: ID does not exist" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.433913 4955 scope.go:117] "RemoveContainer" containerID="6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c" Feb 02 13:05:52 crc kubenswrapper[4955]: E0202 13:05:52.434145 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c\": container with ID starting with 6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c not found: ID does not exist" containerID="6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.434163 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c"} err="failed to get container status \"6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c\": rpc error: code = NotFound desc = could not find container \"6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c\": container with ID starting with 6d178276a3386085b9c250a8685bd665b374c45718aa885bbfa2b62deb72334c not found: ID does not exist" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.434177 4955 scope.go:117] "RemoveContainer" containerID="d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.444529 4955 scope.go:117] "RemoveContainer" containerID="99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.456973 4955 scope.go:117] "RemoveContainer" containerID="a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.468210 4955 scope.go:117] "RemoveContainer" containerID="d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54" Feb 02 13:05:52 crc kubenswrapper[4955]: E0202 13:05:52.468451 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54\": container with ID starting with d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54 not found: ID does not exist" containerID="d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.468481 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54"} err="failed to get container status \"d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54\": rpc error: code = NotFound desc = could not find container 
\"d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54\": container with ID starting with d0d7aab98124c70ab5d473dc77dffbb4ad45d8ae28c10b59d790ce44fa4aec54 not found: ID does not exist" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.468505 4955 scope.go:117] "RemoveContainer" containerID="99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c" Feb 02 13:05:52 crc kubenswrapper[4955]: E0202 13:05:52.468844 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c\": container with ID starting with 99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c not found: ID does not exist" containerID="99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.468866 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c"} err="failed to get container status \"99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c\": rpc error: code = NotFound desc = could not find container \"99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c\": container with ID starting with 99728e885b71e834dc8daf73561838ad3d54dff8472fa3806549709c5a52415c not found: ID does not exist" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.468882 4955 scope.go:117] "RemoveContainer" containerID="a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7" Feb 02 13:05:52 crc kubenswrapper[4955]: E0202 13:05:52.469065 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7\": container with ID starting with a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7 not found: ID does not exist" containerID="a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7" Feb 02 13:05:52 crc kubenswrapper[4955]: I0202 13:05:52.469090 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7"} err="failed to get container status \"a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7\": rpc error: code = NotFound desc = could not find container \"a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7\": container with ID starting with a07a0ce5314022b093ef737efd14de4890e1e5db390d4a56de5cb866f02fefc7 not found: ID does not exist" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.706949 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.730804 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" path="/var/lib/kubelet/pods/5f1d75c2-d5d1-4281-b8cd-55f2e8f84261/volumes" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.731413 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8954d9ba-b759-4256-834b-e781be220107" path="/var/lib/kubelet/pods/8954d9ba-b759-4256-834b-e781be220107/volumes" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.802960 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f67e877-b695-49c6-b42b-56faecbd5b0b-kube-api-access\") pod \"9f67e877-b695-49c6-b42b-56faecbd5b0b\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.803087 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f67e877-b695-49c6-b42b-56faecbd5b0b-kubelet-dir\") pod \"9f67e877-b695-49c6-b42b-56faecbd5b0b\" (UID: \"9f67e877-b695-49c6-b42b-56faecbd5b0b\") " Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.803220 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f67e877-b695-49c6-b42b-56faecbd5b0b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9f67e877-b695-49c6-b42b-56faecbd5b0b" (UID: "9f67e877-b695-49c6-b42b-56faecbd5b0b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.803358 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f67e877-b695-49c6-b42b-56faecbd5b0b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.807591 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f67e877-b695-49c6-b42b-56faecbd5b0b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9f67e877-b695-49c6-b42b-56faecbd5b0b" (UID: "9f67e877-b695-49c6-b42b-56faecbd5b0b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:53 crc kubenswrapper[4955]: I0202 13:05:53.904392 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f67e877-b695-49c6-b42b-56faecbd5b0b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.383618 4955 generic.go:334] "Generic (PLEG): container finished" podID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerID="bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838" exitCode=0 Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.383812 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btrhx" event={"ID":"86b11a5e-9d92-4f01-899d-51f7f9a2bbce","Type":"ContainerDied","Data":"bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838"} Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.391772 4955 generic.go:334] "Generic (PLEG): container finished" podID="b349345a-3983-4555-8ad2-eb5808c83668" containerID="d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064" exitCode=0 Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.391847 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerDied","Data":"d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064"} Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.393676 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9f67e877-b695-49c6-b42b-56faecbd5b0b","Type":"ContainerDied","Data":"eb01d7b2b1a4800f9543f0feda62fc314ab9c05e56edafe6dcdeb3fcc54e7743"} Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.393715 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb01d7b2b1a4800f9543f0feda62fc314ab9c05e56edafe6dcdeb3fcc54e7743" Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.393760 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.397154 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvvd2"] Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.397426 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvvd2" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="registry-server" containerID="cri-o://7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021" gracePeriod=2 Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.822478 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.915771 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-utilities\") pod \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.916163 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-catalog-content\") pod \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.916241 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7vzx\" (UniqueName: \"kubernetes.io/projected/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-kube-api-access-w7vzx\") pod \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\" (UID: \"77d0ea22-5c7c-49a5-b9d3-288ec1be5887\") " Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.918190 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-utilities" (OuterVolumeSpecName: "utilities") pod "77d0ea22-5c7c-49a5-b9d3-288ec1be5887" (UID: "77d0ea22-5c7c-49a5-b9d3-288ec1be5887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:05:54 crc kubenswrapper[4955]: I0202 13:05:54.927005 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-kube-api-access-w7vzx" (OuterVolumeSpecName: "kube-api-access-w7vzx") pod "77d0ea22-5c7c-49a5-b9d3-288ec1be5887" (UID: "77d0ea22-5c7c-49a5-b9d3-288ec1be5887"). InnerVolumeSpecName "kube-api-access-w7vzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.017961 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.017993 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7vzx\" (UniqueName: \"kubernetes.io/projected/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-kube-api-access-w7vzx\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.040938 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77d0ea22-5c7c-49a5-b9d3-288ec1be5887" (UID: "77d0ea22-5c7c-49a5-b9d3-288ec1be5887"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.119362 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d0ea22-5c7c-49a5-b9d3-288ec1be5887-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.401123 4955 generic.go:334] "Generic (PLEG): container finished" podID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerID="7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021" exitCode=0 Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.401206 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvd2" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.401216 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvd2" event={"ID":"77d0ea22-5c7c-49a5-b9d3-288ec1be5887","Type":"ContainerDied","Data":"7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021"} Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.401290 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvd2" event={"ID":"77d0ea22-5c7c-49a5-b9d3-288ec1be5887","Type":"ContainerDied","Data":"710fa8adb47195e4d44887ca6e8b2db39098a17b5e82b31e07b4aa00b2b2d869"} Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.401319 4955 scope.go:117] "RemoveContainer" containerID="7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.403703 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerStarted","Data":"9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1"} Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.405525 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btrhx" event={"ID":"86b11a5e-9d92-4f01-899d-51f7f9a2bbce","Type":"ContainerStarted","Data":"0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067"} Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.415035 4955 scope.go:117] "RemoveContainer" containerID="686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.427634 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8pj9k" podStartSLOduration=3.302019235 podStartE2EDuration="46.427617144s" podCreationTimestamp="2026-02-02 13:05:09 +0000 UTC" firstStartedPulling="2026-02-02 13:05:11.955975929 +0000 UTC m=+162.868312369" lastFinishedPulling="2026-02-02 13:05:55.081573828 +0000 UTC m=+205.993910278" observedRunningTime="2026-02-02 13:05:55.424580696 +0000 UTC m=+206.336917146" watchObservedRunningTime="2026-02-02 13:05:55.427617144 +0000 UTC m=+206.339953594" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.433502 4955 scope.go:117] "RemoveContainer" containerID="672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.451442 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btrhx" podStartSLOduration=2.54881819 podStartE2EDuration="46.451424755s" podCreationTimestamp="2026-02-02 13:05:09 +0000 UTC" 
firstStartedPulling="2026-02-02 13:05:10.903064529 +0000 UTC m=+161.815400979" lastFinishedPulling="2026-02-02 13:05:54.805671094 +0000 UTC m=+205.718007544" observedRunningTime="2026-02-02 13:05:55.4422685 +0000 UTC m=+206.354604970" watchObservedRunningTime="2026-02-02 13:05:55.451424755 +0000 UTC m=+206.363761205" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.454724 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvvd2"] Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.458973 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvvd2"] Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.459083 4955 scope.go:117] "RemoveContainer" containerID="7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021" Feb 02 13:05:55 crc kubenswrapper[4955]: E0202 13:05:55.459543 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021\": container with ID starting with 7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021 not found: ID does not exist" containerID="7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.459607 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021"} err="failed to get container status \"7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021\": rpc error: code = NotFound desc = could not find container \"7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021\": container with ID starting with 7450adabf6132f86be4d33f4b5c22271fc661865c3fa66ab6b38cd2b13de8021 not found: ID does not exist" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.459637 4955 scope.go:117] "RemoveContainer" containerID="686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e" Feb 02 13:05:55 crc kubenswrapper[4955]: E0202 13:05:55.460045 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e\": container with ID starting with 686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e not found: ID does not exist" containerID="686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.460064 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e"} err="failed to get container status \"686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e\": rpc error: code = NotFound desc = could not find container \"686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e\": container with ID starting with 686969a4419e7790ec9f1e46a03641604f654759d97fdad7dd553243867b571e not found: ID does not exist" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.460080 4955 scope.go:117] "RemoveContainer" containerID="672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b" Feb 02 13:05:55 crc kubenswrapper[4955]: E0202 13:05:55.460421 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b\": container with ID starting with 672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b not found: ID does not exist" containerID="672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.460452 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b"} err="failed to get container status \"672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b\": rpc error: code = NotFound desc = could not find container \"672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b\": container with ID starting with 672c5b00fd008ce7fb7969104f321af8b89bffc75342cf50e9bff01a0fa6714b not found: ID does not exist" Feb 02 13:05:55 crc kubenswrapper[4955]: I0202 13:05:55.755114 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" path="/var/lib/kubelet/pods/77d0ea22-5c7c-49a5-b9d3-288ec1be5887/volumes" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.718854 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719321 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719333 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719342 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="extract-content" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719348 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="extract-content" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719365 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="extract-content" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719371 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="extract-content" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719380 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f67e877-b695-49c6-b42b-56faecbd5b0b" containerName="pruner" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719385 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f67e877-b695-49c6-b42b-56faecbd5b0b" containerName="pruner" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719393 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="extract-utilities" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719399 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="extract-utilities" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719406 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719412 4955 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719421 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="extract-utilities" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719426 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="extract-utilities" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719433 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719440 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719450 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="extract-utilities" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719456 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="extract-utilities" Feb 02 13:05:56 crc kubenswrapper[4955]: E0202 13:05:56.719467 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="extract-content" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719474 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="extract-content" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719673 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d75c2-d5d1-4281-b8cd-55f2e8f84261" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719692 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8954d9ba-b759-4256-834b-e781be220107" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719703 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f67e877-b695-49c6-b42b-56faecbd5b0b" containerName="pruner" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.719710 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d0ea22-5c7c-49a5-b9d3-288ec1be5887" containerName="registry-server" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.720063 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.722128 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.722204 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.729392 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.838460 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kube-api-access\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.838531 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kubelet-dir\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.838583 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-var-lock\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.940167 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kube-api-access\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.940214 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kubelet-dir\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.940240 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-var-lock\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.940324 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-var-lock\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.940366 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:56 crc kubenswrapper[4955]: I0202 13:05:56.960225 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kube-api-access\") pod \"installer-9-crc\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:57 crc kubenswrapper[4955]: I0202 13:05:57.035657 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:57 crc kubenswrapper[4955]: I0202 13:05:57.417485 4955 generic.go:334] "Generic (PLEG): container finished" podID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerID="cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6" exitCode=0 Feb 02 13:05:57 crc kubenswrapper[4955]: I0202 13:05:57.417571 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhpwq" event={"ID":"c603a0e0-e73c-4d68-b3f5-947d61505f43","Type":"ContainerDied","Data":"cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6"} Feb 02 13:05:57 crc kubenswrapper[4955]: I0202 13:05:57.480530 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 13:05:58 crc kubenswrapper[4955]: I0202 13:05:58.426925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"087ff40a-30e1-4f8f-919f-1f7148cc69ed","Type":"ContainerStarted","Data":"93a7e02da7010bd6fa6c3d6070237568c239c58e43394204122b318bbc4d7a08"} Feb 02 13:05:58 crc kubenswrapper[4955]: I0202 13:05:58.428708 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"087ff40a-30e1-4f8f-919f-1f7148cc69ed","Type":"ContainerStarted","Data":"324e31abeae06cbe9be9a63024762223785885dfbd70605668291d91fe55ce42"} Feb 02 13:05:59 crc kubenswrapper[4955]: I0202 13:05:59.523814 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:59 crc kubenswrapper[4955]: I0202 13:05:59.523865 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:59 crc kubenswrapper[4955]: I0202 13:05:59.558533 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:05:59 crc kubenswrapper[4955]: I0202 13:05:59.575717 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.575684525 podStartE2EDuration="3.575684525s" podCreationTimestamp="2026-02-02 13:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:58.442911759 +0000 UTC m=+209.355248209" watchObservedRunningTime="2026-02-02 13:05:59.575684525 +0000 UTC m=+210.488021045" Feb 02 13:05:59 crc kubenswrapper[4955]: I0202 13:05:59.922327 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:59 crc kubenswrapper[4955]: I0202 13:05:59.923317 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:05:59 
crc kubenswrapper[4955]: I0202 13:05:59.960004 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:06:00 crc kubenswrapper[4955]: I0202 13:06:00.438025 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhpwq" event={"ID":"c603a0e0-e73c-4d68-b3f5-947d61505f43","Type":"ContainerStarted","Data":"75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb"} Feb 02 13:06:00 crc kubenswrapper[4955]: I0202 13:06:00.454662 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fhpwq" podStartSLOduration=2.658024131 podStartE2EDuration="50.454643005s" podCreationTimestamp="2026-02-02 13:05:10 +0000 UTC" firstStartedPulling="2026-02-02 13:05:11.948333588 +0000 UTC m=+162.860670038" lastFinishedPulling="2026-02-02 13:05:59.744952462 +0000 UTC m=+210.657288912" observedRunningTime="2026-02-02 13:06:00.453044583 +0000 UTC m=+211.365381033" watchObservedRunningTime="2026-02-02 13:06:00.454643005 +0000 UTC m=+211.366979455" Feb 02 13:06:00 crc kubenswrapper[4955]: I0202 13:06:00.473752 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:06:00 crc kubenswrapper[4955]: I0202 13:06:00.475264 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:06:00 crc kubenswrapper[4955]: I0202 13:06:00.534657 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:06:00 crc kubenswrapper[4955]: I0202 13:06:00.534790 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:06:01 crc kubenswrapper[4955]: I0202 13:06:01.568314 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhpwq" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="registry-server" probeResult="failure" output=< Feb 02 13:06:01 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 13:06:01 crc kubenswrapper[4955]: > Feb 02 13:06:02 crc kubenswrapper[4955]: I0202 13:06:02.591603 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pj9k"] Feb 02 13:06:02 crc kubenswrapper[4955]: I0202 13:06:02.591918 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8pj9k" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="registry-server" containerID="cri-o://9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1" gracePeriod=2 Feb 02 13:06:02 crc kubenswrapper[4955]: I0202 13:06:02.938241 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.016518 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.016617 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.016659 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.016896 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-catalog-content\") pod \"b349345a-3983-4555-8ad2-eb5808c83668\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.017088 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.017157 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d" gracePeriod=600 Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.017252 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-756fq\" (UniqueName: \"kubernetes.io/projected/b349345a-3983-4555-8ad2-eb5808c83668-kube-api-access-756fq\") pod \"b349345a-3983-4555-8ad2-eb5808c83668\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.017352 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-utilities\") pod \"b349345a-3983-4555-8ad2-eb5808c83668\" (UID: \"b349345a-3983-4555-8ad2-eb5808c83668\") " Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.020053 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-utilities" (OuterVolumeSpecName: "utilities") pod "b349345a-3983-4555-8ad2-eb5808c83668" (UID: "b349345a-3983-4555-8ad2-eb5808c83668"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.027712 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b349345a-3983-4555-8ad2-eb5808c83668-kube-api-access-756fq" (OuterVolumeSpecName: "kube-api-access-756fq") pod "b349345a-3983-4555-8ad2-eb5808c83668" (UID: "b349345a-3983-4555-8ad2-eb5808c83668"). InnerVolumeSpecName "kube-api-access-756fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.041241 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b349345a-3983-4555-8ad2-eb5808c83668" (UID: "b349345a-3983-4555-8ad2-eb5808c83668"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.118783 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.118817 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b349345a-3983-4555-8ad2-eb5808c83668-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.118826 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-756fq\" (UniqueName: \"kubernetes.io/projected/b349345a-3983-4555-8ad2-eb5808c83668-kube-api-access-756fq\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.454892 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d" exitCode=0 Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.454961 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d"} Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.455213 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"e2b066fe3d22716e67cada877eecd7854555a99c0cda44ff4824ac9dad20f74b"} Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.457409 4955 generic.go:334] "Generic (PLEG): container finished" podID="b349345a-3983-4555-8ad2-eb5808c83668" containerID="9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1" exitCode=0 Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.457431 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerDied","Data":"9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1"} Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.457458 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pj9k" 
event={"ID":"b349345a-3983-4555-8ad2-eb5808c83668","Type":"ContainerDied","Data":"80027a853c041c035f7b4d76fb792f0e08cd0e6ff6fadcdf43d5e362cb2c0ca5"} Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.457475 4955 scope.go:117] "RemoveContainer" containerID="9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.457489 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pj9k" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.476758 4955 scope.go:117] "RemoveContainer" containerID="d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.494509 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pj9k"] Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.497840 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pj9k"] Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.502652 4955 scope.go:117] "RemoveContainer" containerID="c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.516783 4955 scope.go:117] "RemoveContainer" containerID="9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1" Feb 02 13:06:03 crc kubenswrapper[4955]: E0202 13:06:03.517273 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1\": container with ID starting with 9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1 not found: ID does not exist" containerID="9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.517308 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1"} err="failed to get container status \"9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1\": rpc error: code = NotFound desc = could not find container \"9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1\": container with ID starting with 9a8e086e295882c481c5ef00efe3e988f263b3604ddf16f8c980d66e087867d1 not found: ID does not exist" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.517335 4955 scope.go:117] "RemoveContainer" containerID="d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064" Feb 02 13:06:03 crc kubenswrapper[4955]: E0202 13:06:03.517724 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064\": container with ID starting with d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064 not found: ID does not exist" containerID="d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.517758 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064"} err="failed to get container status \"d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064\": rpc error: code = NotFound desc = could not find container 
\"d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064\": container with ID starting with d303008cebc224f7b1f4a9d6f82ecf6e4da40f08eb9773272aced54ce069d064 not found: ID does not exist" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.517780 4955 scope.go:117] "RemoveContainer" containerID="c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d" Feb 02 13:06:03 crc kubenswrapper[4955]: E0202 13:06:03.518196 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d\": container with ID starting with c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d not found: ID does not exist" containerID="c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.518310 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d"} err="failed to get container status \"c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d\": rpc error: code = NotFound desc = could not find container \"c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d\": container with ID starting with c26fed79122916ee5c536e596e3770a17cf7cce75b70bbde598af8ff0301402d not found: ID does not exist" Feb 02 13:06:03 crc kubenswrapper[4955]: I0202 13:06:03.728539 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b349345a-3983-4555-8ad2-eb5808c83668" path="/var/lib/kubelet/pods/b349345a-3983-4555-8ad2-eb5808c83668/volumes" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.156052 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" podUID="ddec21a9-43c9-4885-abde-9e65c9a8762d" containerName="oauth-openshift" containerID="cri-o://daa3b54a32e619bbe4ac4ab5fed8357c9e6e876192037a9dbd54b8ff10e95364" gracePeriod=15 Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.496537 4955 generic.go:334] "Generic (PLEG): container finished" podID="ddec21a9-43c9-4885-abde-9e65c9a8762d" containerID="daa3b54a32e619bbe4ac4ab5fed8357c9e6e876192037a9dbd54b8ff10e95364" exitCode=0 Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.496613 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" event={"ID":"ddec21a9-43c9-4885-abde-9e65c9a8762d","Type":"ContainerDied","Data":"daa3b54a32e619bbe4ac4ab5fed8357c9e6e876192037a9dbd54b8ff10e95364"} Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.537458 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684523 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-session\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684594 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-policies\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684730 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-ocp-branding-template\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684759 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-cliconfig\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684783 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-serving-cert\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684858 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-provider-selection\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684889 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-router-certs\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684932 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-service-ca\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.684974 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-error\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " 
Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.685034 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-idp-0-file-data\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.685064 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd6dh\" (UniqueName: \"kubernetes.io/projected/ddec21a9-43c9-4885-abde-9e65c9a8762d-kube-api-access-rd6dh\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.685089 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-dir\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.685793 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.685866 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686268 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-trusted-ca-bundle\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686342 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-login\") pod \"ddec21a9-43c9-4885-abde-9e65c9a8762d\" (UID: \"ddec21a9-43c9-4885-abde-9e65c9a8762d\") " Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686539 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686679 4955 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686713 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686731 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686886 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.686905 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.690821 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.691122 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.691435 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddec21a9-43c9-4885-abde-9e65c9a8762d-kube-api-access-rd6dh" (OuterVolumeSpecName: "kube-api-access-rd6dh") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "kube-api-access-rd6dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.693861 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.694257 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.694877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.701739 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.702788 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.703044 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ddec21a9-43c9-4885-abde-9e65c9a8762d" (UID: "ddec21a9-43c9-4885-abde-9e65c9a8762d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788211 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788251 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788435 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788445 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd6dh\" (UniqueName: \"kubernetes.io/projected/ddec21a9-43c9-4885-abde-9e65c9a8762d-kube-api-access-rd6dh\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788456 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788468 4955 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ddec21a9-43c9-4885-abde-9e65c9a8762d-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788478 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788488 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788523 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788535 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:08 crc kubenswrapper[4955]: I0202 13:06:08.788545 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ddec21a9-43c9-4885-abde-9e65c9a8762d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:09 crc kubenswrapper[4955]: I0202 13:06:09.502008 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" 
event={"ID":"ddec21a9-43c9-4885-abde-9e65c9a8762d","Type":"ContainerDied","Data":"8d77498f6449fec965bdb062decdc8190ebe09a6583d431e7107f84fd1978ab8"} Feb 02 13:06:09 crc kubenswrapper[4955]: I0202 13:06:09.502066 4955 scope.go:117] "RemoveContainer" containerID="daa3b54a32e619bbe4ac4ab5fed8357c9e6e876192037a9dbd54b8ff10e95364" Feb 02 13:06:09 crc kubenswrapper[4955]: I0202 13:06:09.502074 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmwz5" Feb 02 13:06:09 crc kubenswrapper[4955]: I0202 13:06:09.535118 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmwz5"] Feb 02 13:06:09 crc kubenswrapper[4955]: I0202 13:06:09.538215 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmwz5"] Feb 02 13:06:09 crc kubenswrapper[4955]: I0202 13:06:09.722020 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddec21a9-43c9-4885-abde-9e65c9a8762d" path="/var/lib/kubelet/pods/ddec21a9-43c9-4885-abde-9e65c9a8762d/volumes" Feb 02 13:06:10 crc kubenswrapper[4955]: I0202 13:06:10.576269 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:06:10 crc kubenswrapper[4955]: I0202 13:06:10.622472 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.757580 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6"] Feb 02 13:06:16 crc kubenswrapper[4955]: E0202 13:06:16.758268 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddec21a9-43c9-4885-abde-9e65c9a8762d" containerName="oauth-openshift" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.758280 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddec21a9-43c9-4885-abde-9e65c9a8762d" containerName="oauth-openshift" Feb 02 13:06:16 crc kubenswrapper[4955]: E0202 13:06:16.758290 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="registry-server" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.758296 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="registry-server" Feb 02 13:06:16 crc kubenswrapper[4955]: E0202 13:06:16.758306 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="extract-utilities" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.758312 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="extract-utilities" Feb 02 13:06:16 crc kubenswrapper[4955]: E0202 13:06:16.758327 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="extract-content" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.758333 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="extract-content" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.758422 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddec21a9-43c9-4885-abde-9e65c9a8762d" containerName="oauth-openshift" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 
13:06:16.758437 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b349345a-3983-4555-8ad2-eb5808c83668" containerName="registry-server" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.758826 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.762509 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.762596 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.762879 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.763045 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.763229 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.764252 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.764389 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.764691 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.764730 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.765203 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.765628 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.767705 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.773232 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.788872 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.791180 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6"] Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.794319 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.915886 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.915935 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.915962 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.915977 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwt2\" (UniqueName: \"kubernetes.io/projected/f73000b9-db20-45b7-9e84-426cfa1741fb-kube-api-access-8hwt2\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916007 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916023 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916081 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 
13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916109 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916125 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916157 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916177 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:16 crc kubenswrapper[4955]: I0202 13:06:16.916193 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f73000b9-db20-45b7-9e84-426cfa1741fb-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017266 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017330 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " 
pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017361 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017375 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017393 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017413 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017435 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f73000b9-db20-45b7-9e84-426cfa1741fb-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017474 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017495 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " 
pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017517 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017532 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwt2\" (UniqueName: \"kubernetes.io/projected/f73000b9-db20-45b7-9e84-426cfa1741fb-kube-api-access-8hwt2\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017548 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.017582 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.018105 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f73000b9-db20-45b7-9e84-426cfa1741fb-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.018600 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.018836 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.019105 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: 
I0202 13:06:17.019837 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.023377 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.023712 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.023822 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.024111 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.024427 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.024677 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.026058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.041130 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f73000b9-db20-45b7-9e84-426cfa1741fb-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.044685 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwt2\" (UniqueName: \"kubernetes.io/projected/f73000b9-db20-45b7-9e84-426cfa1741fb-kube-api-access-8hwt2\") pod \"oauth-openshift-5dcd86cbbd-b2jn6\" (UID: \"f73000b9-db20-45b7-9e84-426cfa1741fb\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.086448 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.510222 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6"] Feb 02 13:06:17 crc kubenswrapper[4955]: W0202 13:06:17.518603 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf73000b9_db20_45b7_9e84_426cfa1741fb.slice/crio-4f0a51a12fb8a14b8d98c6304e4537373af963c3ef566d504b320eb091ea0555 WatchSource:0}: Error finding container 4f0a51a12fb8a14b8d98c6304e4537373af963c3ef566d504b320eb091ea0555: Status 404 returned error can't find the container with id 4f0a51a12fb8a14b8d98c6304e4537373af963c3ef566d504b320eb091ea0555 Feb 02 13:06:17 crc kubenswrapper[4955]: I0202 13:06:17.539923 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" event={"ID":"f73000b9-db20-45b7-9e84-426cfa1741fb","Type":"ContainerStarted","Data":"4f0a51a12fb8a14b8d98c6304e4537373af963c3ef566d504b320eb091ea0555"} Feb 02 13:06:18 crc kubenswrapper[4955]: I0202 13:06:18.546203 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" event={"ID":"f73000b9-db20-45b7-9e84-426cfa1741fb","Type":"ContainerStarted","Data":"7877f74e462ee883e6aef444d7096fbcf27f8de49b5e47aadc6e8b5e3a31eb5d"} Feb 02 13:06:18 crc kubenswrapper[4955]: I0202 13:06:18.546475 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:18 crc kubenswrapper[4955]: I0202 13:06:18.553520 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" Feb 02 13:06:18 crc kubenswrapper[4955]: I0202 13:06:18.595960 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-b2jn6" podStartSLOduration=35.595941043 podStartE2EDuration="35.595941043s" podCreationTimestamp="2026-02-02 13:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:06:18.570263404 +0000 UTC m=+229.482599864" watchObservedRunningTime="2026-02-02 13:06:18.595941043 +0000 UTC m=+229.508277503" Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.869313 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5hd5"] Feb 02 13:06:32 crc 
kubenswrapper[4955]: I0202 13:06:32.870362 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t5hd5" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="registry-server" containerID="cri-o://8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17" gracePeriod=30 Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.887877 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvfwg"] Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.888915 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rvfwg" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="registry-server" containerID="cri-o://36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9" gracePeriod=30 Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.916250 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-74zm4"] Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.916469 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerName="marketplace-operator" containerID="cri-o://4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d" gracePeriod=30 Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.918632 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btrhx"] Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.918803 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btrhx" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="registry-server" containerID="cri-o://0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067" gracePeriod=30 Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.929517 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tcjcw"] Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.930389 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.932166 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhpwq"] Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.932482 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fhpwq" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="registry-server" containerID="cri-o://75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb" gracePeriod=30 Feb 02 13:06:32 crc kubenswrapper[4955]: I0202 13:06:32.936883 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tcjcw"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.002852 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.002909 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799pj\" (UniqueName: \"kubernetes.io/projected/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-kube-api-access-799pj\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.002956 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.103745 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.103800 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799pj\" (UniqueName: \"kubernetes.io/projected/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-kube-api-access-799pj\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.103849 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.106844 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.110234 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.123260 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799pj\" (UniqueName: \"kubernetes.io/projected/c95ff8c4-bd53-45dc-85fb-3292fbd52e0f-kube-api-access-799pj\") pod \"marketplace-operator-79b997595-tcjcw\" (UID: \"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.325381 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.330081 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.337072 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.398420 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.404794 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.426456 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508669 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-catalog-content\") pod \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508737 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-catalog-content\") pod \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508777 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sswlj\" (UniqueName: \"kubernetes.io/projected/bdc220a9-b1a9-4d3b-aba5-37820b63181f-kube-api-access-sswlj\") pod \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508798 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgt4t\" (UniqueName: \"kubernetes.io/projected/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-kube-api-access-rgt4t\") pod \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508827 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-utilities\") pod \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508856 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-kube-api-access-56cmj\") pod \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508877 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-catalog-content\") pod \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\" (UID: \"86b11a5e-9d92-4f01-899d-51f7f9a2bbce\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508973 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-trusted-ca\") pod \"e17bf741-cd77-4d87-aea5-663e5d2ba319\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.508993 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-utilities\") pod \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\" (UID: \"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.509010 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72cmg\" (UniqueName: 
\"kubernetes.io/projected/c603a0e0-e73c-4d68-b3f5-947d61505f43-kube-api-access-72cmg\") pod \"c603a0e0-e73c-4d68-b3f5-947d61505f43\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.509093 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-utilities\") pod \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\" (UID: \"bdc220a9-b1a9-4d3b-aba5-37820b63181f\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.509110 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-utilities\") pod \"c603a0e0-e73c-4d68-b3f5-947d61505f43\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.509129 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-catalog-content\") pod \"c603a0e0-e73c-4d68-b3f5-947d61505f43\" (UID: \"c603a0e0-e73c-4d68-b3f5-947d61505f43\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.509148 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcwfm\" (UniqueName: \"kubernetes.io/projected/e17bf741-cd77-4d87-aea5-663e5d2ba319-kube-api-access-pcwfm\") pod \"e17bf741-cd77-4d87-aea5-663e5d2ba319\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.509172 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-operator-metrics\") pod \"e17bf741-cd77-4d87-aea5-663e5d2ba319\" (UID: \"e17bf741-cd77-4d87-aea5-663e5d2ba319\") " Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.511278 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-utilities" (OuterVolumeSpecName: "utilities") pod "86b11a5e-9d92-4f01-899d-51f7f9a2bbce" (UID: "86b11a5e-9d92-4f01-899d-51f7f9a2bbce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.513388 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-kube-api-access-56cmj" (OuterVolumeSpecName: "kube-api-access-56cmj") pod "86b11a5e-9d92-4f01-899d-51f7f9a2bbce" (UID: "86b11a5e-9d92-4f01-899d-51f7f9a2bbce"). InnerVolumeSpecName "kube-api-access-56cmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.516045 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-utilities" (OuterVolumeSpecName: "utilities") pod "bdc220a9-b1a9-4d3b-aba5-37820b63181f" (UID: "bdc220a9-b1a9-4d3b-aba5-37820b63181f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.526734 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-utilities" (OuterVolumeSpecName: "utilities") pod "9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" (UID: "9aa898a0-670e-4c3b-87c8-7d1d275fc6b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.527826 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc220a9-b1a9-4d3b-aba5-37820b63181f-kube-api-access-sswlj" (OuterVolumeSpecName: "kube-api-access-sswlj") pod "bdc220a9-b1a9-4d3b-aba5-37820b63181f" (UID: "bdc220a9-b1a9-4d3b-aba5-37820b63181f"). InnerVolumeSpecName "kube-api-access-sswlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.529763 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e17bf741-cd77-4d87-aea5-663e5d2ba319" (UID: "e17bf741-cd77-4d87-aea5-663e5d2ba319"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.538106 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-utilities" (OuterVolumeSpecName: "utilities") pod "c603a0e0-e73c-4d68-b3f5-947d61505f43" (UID: "c603a0e0-e73c-4d68-b3f5-947d61505f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.543544 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17bf741-cd77-4d87-aea5-663e5d2ba319-kube-api-access-pcwfm" (OuterVolumeSpecName: "kube-api-access-pcwfm") pod "e17bf741-cd77-4d87-aea5-663e5d2ba319" (UID: "e17bf741-cd77-4d87-aea5-663e5d2ba319"). InnerVolumeSpecName "kube-api-access-pcwfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.551084 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c603a0e0-e73c-4d68-b3f5-947d61505f43-kube-api-access-72cmg" (OuterVolumeSpecName: "kube-api-access-72cmg") pod "c603a0e0-e73c-4d68-b3f5-947d61505f43" (UID: "c603a0e0-e73c-4d68-b3f5-947d61505f43"). InnerVolumeSpecName "kube-api-access-72cmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.561691 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e17bf741-cd77-4d87-aea5-663e5d2ba319" (UID: "e17bf741-cd77-4d87-aea5-663e5d2ba319"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.565751 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-kube-api-access-rgt4t" (OuterVolumeSpecName: "kube-api-access-rgt4t") pod "9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" (UID: "9aa898a0-670e-4c3b-87c8-7d1d275fc6b5"). InnerVolumeSpecName "kube-api-access-rgt4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.568264 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86b11a5e-9d92-4f01-899d-51f7f9a2bbce" (UID: "86b11a5e-9d92-4f01-899d-51f7f9a2bbce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.600450 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc220a9-b1a9-4d3b-aba5-37820b63181f" (UID: "bdc220a9-b1a9-4d3b-aba5-37820b63181f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.602928 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" (UID: "9aa898a0-670e-4c3b-87c8-7d1d275fc6b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610269 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610297 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sswlj\" (UniqueName: \"kubernetes.io/projected/bdc220a9-b1a9-4d3b-aba5-37820b63181f-kube-api-access-sswlj\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610308 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgt4t\" (UniqueName: \"kubernetes.io/projected/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-kube-api-access-rgt4t\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610318 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610327 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-kube-api-access-56cmj\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610335 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b11a5e-9d92-4f01-899d-51f7f9a2bbce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610343 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610353 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610362 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72cmg\" (UniqueName: \"kubernetes.io/projected/c603a0e0-e73c-4d68-b3f5-947d61505f43-kube-api-access-72cmg\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610370 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc220a9-b1a9-4d3b-aba5-37820b63181f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610378 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610385 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcwfm\" (UniqueName: \"kubernetes.io/projected/e17bf741-cd77-4d87-aea5-663e5d2ba319-kube-api-access-pcwfm\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610393 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/e17bf741-cd77-4d87-aea5-663e5d2ba319-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.610401 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.617101 4955 generic.go:334] "Generic (PLEG): container finished" podID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerID="36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9" exitCode=0 Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.617150 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvfwg" event={"ID":"bdc220a9-b1a9-4d3b-aba5-37820b63181f","Type":"ContainerDied","Data":"36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.617175 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvfwg" event={"ID":"bdc220a9-b1a9-4d3b-aba5-37820b63181f","Type":"ContainerDied","Data":"475d36199c217ed87748d1a11e6d024d0f544303578681dfe04f553bb940c304"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.617189 4955 scope.go:117] "RemoveContainer" containerID="36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.617293 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvfwg" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.627035 4955 generic.go:334] "Generic (PLEG): container finished" podID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerID="0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067" exitCode=0 Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.627107 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btrhx" event={"ID":"86b11a5e-9d92-4f01-899d-51f7f9a2bbce","Type":"ContainerDied","Data":"0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.627139 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btrhx" event={"ID":"86b11a5e-9d92-4f01-899d-51f7f9a2bbce","Type":"ContainerDied","Data":"ce91035d10a5ccc0f66955f900259c83bdbb843a0e08b40930e676395a2dfba5"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.627223 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btrhx" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.633140 4955 generic.go:334] "Generic (PLEG): container finished" podID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerID="4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d" exitCode=0 Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.633217 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" event={"ID":"e17bf741-cd77-4d87-aea5-663e5d2ba319","Type":"ContainerDied","Data":"4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.633241 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" event={"ID":"e17bf741-cd77-4d87-aea5-663e5d2ba319","Type":"ContainerDied","Data":"c788001b83b4f4c716c4310366d1bbc8d2f91bfa69384d64b2b7d49e59cedaf4"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.633293 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-74zm4" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.637315 4955 generic.go:334] "Generic (PLEG): container finished" podID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerID="8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17" exitCode=0 Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.637381 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5hd5" event={"ID":"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5","Type":"ContainerDied","Data":"8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.637415 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5hd5" event={"ID":"9aa898a0-670e-4c3b-87c8-7d1d275fc6b5","Type":"ContainerDied","Data":"957c808df461c31833066d783fa88861acd3f2a4aebebfaedd0d035b49affa7e"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.637492 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5hd5" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.648039 4955 scope.go:117] "RemoveContainer" containerID="a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.651645 4955 generic.go:334] "Generic (PLEG): container finished" podID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerID="75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb" exitCode=0 Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.651688 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhpwq" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.651729 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhpwq" event={"ID":"c603a0e0-e73c-4d68-b3f5-947d61505f43","Type":"ContainerDied","Data":"75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.652480 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhpwq" event={"ID":"c603a0e0-e73c-4d68-b3f5-947d61505f43","Type":"ContainerDied","Data":"2047a53cd80e2cffde343678e7a0df375bab3254772ddd10c953d56180252950"} Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.655444 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvfwg"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.659322 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rvfwg"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.668225 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-74zm4"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.687629 4955 scope.go:117] "RemoveContainer" containerID="51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.689133 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-74zm4"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.708247 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btrhx"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.715251 4955 scope.go:117] "RemoveContainer" containerID="36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.715684 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9\": container with ID starting with 36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9 not found: ID does not exist" containerID="36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.715726 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9"} err="failed to get container status \"36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9\": rpc error: code = NotFound desc = could not find container \"36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9\": container with ID starting with 36435453298de3954cf6153be62a879cfdf6e43a2b2bed5305d87a5449f073e9 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.715755 4955 scope.go:117] "RemoveContainer" containerID="a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.724461 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336\": container with ID starting with a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336 not 
found: ID does not exist" containerID="a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.724542 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336"} err="failed to get container status \"a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336\": rpc error: code = NotFound desc = could not find container \"a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336\": container with ID starting with a5f11abd99e90ff090419c144a86cbe15193742ecf1c89a0afacfdb945df2336 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.724604 4955 scope.go:117] "RemoveContainer" containerID="51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.725027 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7\": container with ID starting with 51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7 not found: ID does not exist" containerID="51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.725068 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7"} err="failed to get container status \"51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7\": rpc error: code = NotFound desc = could not find container \"51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7\": container with ID starting with 51df4ed2fb65882db625077c8cdecec209e31699c763aeb1c5f81c507a9939e7 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.725088 4955 scope.go:117] "RemoveContainer" containerID="0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.737304 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c603a0e0-e73c-4d68-b3f5-947d61505f43" (UID: "c603a0e0-e73c-4d68-b3f5-947d61505f43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.744434 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" path="/var/lib/kubelet/pods/bdc220a9-b1a9-4d3b-aba5-37820b63181f/volumes" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.746700 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" path="/var/lib/kubelet/pods/e17bf741-cd77-4d87-aea5-663e5d2ba319/volumes" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.748146 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btrhx"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.748243 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5hd5"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.748263 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t5hd5"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.758077 4955 scope.go:117] "RemoveContainer" containerID="bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.779157 4955 scope.go:117] "RemoveContainer" containerID="75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.791170 4955 scope.go:117] "RemoveContainer" containerID="0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.791736 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067\": container with ID starting with 0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067 not found: ID does not exist" containerID="0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.791774 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067"} err="failed to get container status \"0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067\": rpc error: code = NotFound desc = could not find container \"0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067\": container with ID starting with 0816e22d69e4aa8eb4eee2ef52a4f8f476e8e6ac48c06d8a8dfa9715c55b4067 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.791802 4955 scope.go:117] "RemoveContainer" containerID="bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.792048 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838\": container with ID starting with bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838 not found: ID does not exist" containerID="bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.792075 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838"} 
err="failed to get container status \"bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838\": rpc error: code = NotFound desc = could not find container \"bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838\": container with ID starting with bd9a673443d64897951d99b79e6699c1271d678d2fadf3f045336ade696a9838 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.792091 4955 scope.go:117] "RemoveContainer" containerID="75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.792534 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b\": container with ID starting with 75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b not found: ID does not exist" containerID="75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.792577 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b"} err="failed to get container status \"75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b\": rpc error: code = NotFound desc = could not find container \"75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b\": container with ID starting with 75b2de72a39b65d35c2fc06b8a0aa9b101810f73a74cf86dad9a377a25e2963b not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.792590 4955 scope.go:117] "RemoveContainer" containerID="4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.804616 4955 scope.go:117] "RemoveContainer" containerID="4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.805097 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d\": container with ID starting with 4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d not found: ID does not exist" containerID="4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.805133 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d"} err="failed to get container status \"4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d\": rpc error: code = NotFound desc = could not find container \"4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d\": container with ID starting with 4de12e4233a1c36ebc00faae3a54a9c56a3e4e0b70860d427d01e8e35c6c105d not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.805184 4955 scope.go:117] "RemoveContainer" containerID="8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.815221 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603a0e0-e73c-4d68-b3f5-947d61505f43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.816710 4955 scope.go:117] 
"RemoveContainer" containerID="de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.831261 4955 scope.go:117] "RemoveContainer" containerID="75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.844448 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tcjcw"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.845535 4955 scope.go:117] "RemoveContainer" containerID="8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.846251 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17\": container with ID starting with 8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17 not found: ID does not exist" containerID="8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.846320 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17"} err="failed to get container status \"8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17\": rpc error: code = NotFound desc = could not find container \"8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17\": container with ID starting with 8602b75e55f750170777a659c8d4eeac89bd15484cfe471692267a062cf89b17 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.846352 4955 scope.go:117] "RemoveContainer" containerID="de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.846815 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d\": container with ID starting with de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d not found: ID does not exist" containerID="de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.846871 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d"} err="failed to get container status \"de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d\": rpc error: code = NotFound desc = could not find container \"de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d\": container with ID starting with de60dafc074fda1ce612d6933e1acd1cb83a510ed976a8f3001925317297122d not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.846895 4955 scope.go:117] "RemoveContainer" containerID="75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.847168 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453\": container with ID starting with 75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453 not found: ID does not exist" 
containerID="75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.847192 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453"} err="failed to get container status \"75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453\": rpc error: code = NotFound desc = could not find container \"75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453\": container with ID starting with 75414022d4edc3884b75b5883b6ef1093f7fe2268355a31c0a44f27e9407f453 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.847209 4955 scope.go:117] "RemoveContainer" containerID="75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb" Feb 02 13:06:33 crc kubenswrapper[4955]: W0202 13:06:33.852726 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95ff8c4_bd53_45dc_85fb_3292fbd52e0f.slice/crio-3d338571c1d6cd3a45ab6ccb7007f1fd6cc4a3fc8293a6322d8a01f10f48672e WatchSource:0}: Error finding container 3d338571c1d6cd3a45ab6ccb7007f1fd6cc4a3fc8293a6322d8a01f10f48672e: Status 404 returned error can't find the container with id 3d338571c1d6cd3a45ab6ccb7007f1fd6cc4a3fc8293a6322d8a01f10f48672e Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.875222 4955 scope.go:117] "RemoveContainer" containerID="cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.906015 4955 scope.go:117] "RemoveContainer" containerID="ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.924043 4955 scope.go:117] "RemoveContainer" containerID="75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.924659 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb\": container with ID starting with 75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb not found: ID does not exist" containerID="75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.924756 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb"} err="failed to get container status \"75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb\": rpc error: code = NotFound desc = could not find container \"75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb\": container with ID starting with 75e85fd20225cda15a8a8cfd26f1ed81f50e3cb2ae6645a65d9033066742eeeb not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.924854 4955 scope.go:117] "RemoveContainer" containerID="cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.925310 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6\": container with ID starting with cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6 not found: ID does not exist" 
containerID="cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.925388 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6"} err="failed to get container status \"cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6\": rpc error: code = NotFound desc = could not find container \"cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6\": container with ID starting with cc5f17670665d1d64905f7b334edadf4f0f7a1a3b6b2dc1f12e268c9ce9202e6 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.925453 4955 scope.go:117] "RemoveContainer" containerID="ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60" Feb 02 13:06:33 crc kubenswrapper[4955]: E0202 13:06:33.925826 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60\": container with ID starting with ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60 not found: ID does not exist" containerID="ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.925873 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60"} err="failed to get container status \"ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60\": rpc error: code = NotFound desc = could not find container \"ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60\": container with ID starting with ab35a2e7ac41231e9423195c33ebc9c93351e72abf76eb9a4eef2aedea5f7a60 not found: ID does not exist" Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.971525 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhpwq"] Feb 02 13:06:33 crc kubenswrapper[4955]: I0202 13:06:33.973666 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fhpwq"] Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.488965 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xm64"] Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489198 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489218 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489236 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489288 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489301 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489312 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489368 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489377 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489387 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerName="marketplace-operator" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489395 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerName="marketplace-operator" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489406 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489414 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489424 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489433 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489440 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489447 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489458 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489466 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489479 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489486 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="extract-utilities" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489496 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.489504 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.489515 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491621 4955 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="extract-content" Feb 02 13:06:34 crc kubenswrapper[4955]: E0202 13:06:34.491641 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491649 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491794 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491807 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491821 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17bf741-cd77-4d87-aea5-663e5d2ba319" containerName="marketplace-operator" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491833 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc220a9-b1a9-4d3b-aba5-37820b63181f" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.491843 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" containerName="registry-server" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.492719 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.496870 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.499507 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xm64"] Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.623077 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-utilities\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.623120 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68x4h\" (UniqueName: \"kubernetes.io/projected/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-kube-api-access-68x4h\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.623174 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-catalog-content\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.657485 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" 
event={"ID":"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f","Type":"ContainerStarted","Data":"6e860496533479fed6c724fad24d65011287889f0101959c9547c7b74865f20e"} Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.657544 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" event={"ID":"c95ff8c4-bd53-45dc-85fb-3292fbd52e0f","Type":"ContainerStarted","Data":"3d338571c1d6cd3a45ab6ccb7007f1fd6cc4a3fc8293a6322d8a01f10f48672e"} Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.658630 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.663156 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.688462 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tcjcw" podStartSLOduration=2.688440365 podStartE2EDuration="2.688440365s" podCreationTimestamp="2026-02-02 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:06:34.673955563 +0000 UTC m=+245.586292013" watchObservedRunningTime="2026-02-02 13:06:34.688440365 +0000 UTC m=+245.600776815" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.724248 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-utilities\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.724486 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68x4h\" (UniqueName: \"kubernetes.io/projected/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-kube-api-access-68x4h\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.724642 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-catalog-content\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.725207 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-catalog-content\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.725334 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-utilities\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.744399 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-68x4h\" (UniqueName: \"kubernetes.io/projected/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-kube-api-access-68x4h\") pod \"certified-operators-7xm64\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:34 crc kubenswrapper[4955]: I0202 13:06:34.818548 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.179361 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xm64"] Feb 02 13:06:35 crc kubenswrapper[4955]: W0202 13:06:35.183344 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5b1d16_e075_4abc_8d7e_9f0f08c6b4b2.slice/crio-00ed981583e6035fb0b75ec7866ceb87f45afb10c6d12bb2586ba54236dc40a0 WatchSource:0}: Error finding container 00ed981583e6035fb0b75ec7866ceb87f45afb10c6d12bb2586ba54236dc40a0: Status 404 returned error can't find the container with id 00ed981583e6035fb0b75ec7866ceb87f45afb10c6d12bb2586ba54236dc40a0 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.604315 4955 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605127 4955 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605214 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605389 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69" gracePeriod=15 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605439 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d" gracePeriod=15 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605495 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde" gracePeriod=15 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605530 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410" gracePeriod=15 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.605587 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42" gracePeriod=15 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611240 4955 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611447 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611459 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611467 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611474 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611482 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611488 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611497 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611502 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611511 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611517 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611527 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611540 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611565 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611571 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611649 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611657 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611664 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611672 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611681 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611687 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.611758 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611764 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.611873 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634531 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634643 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634672 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634695 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634717 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634744 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.634763 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.676106 4955 generic.go:334] "Generic (PLEG): container finished" podID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerID="3dcc4c6d7157dafa47b5209f4f0c5557a8a43669a54d22492d5fefd3bd73b348" exitCode=0 Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.676202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerDied","Data":"3dcc4c6d7157dafa47b5209f4f0c5557a8a43669a54d22492d5fefd3bd73b348"} Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.676251 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerStarted","Data":"00ed981583e6035fb0b75ec7866ceb87f45afb10c6d12bb2586ba54236dc40a0"} Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.677113 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.677432 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:35 crc kubenswrapper[4955]: E0202 13:06:35.677735 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.86:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-7xm64.18906fd0b17125ce openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-7xm64,UID:da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2,APIVersion:v1,ResourceVersion:29613,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/certified-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:06:35.67746811 +0000 UTC m=+246.589804560,LastTimestamp:2026-02-02 13:06:35.67746811 +0000 UTC m=+246.589804560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.722482 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b11a5e-9d92-4f01-899d-51f7f9a2bbce" path="/var/lib/kubelet/pods/86b11a5e-9d92-4f01-899d-51f7f9a2bbce/volumes" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.723143 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa898a0-670e-4c3b-87c8-7d1d275fc6b5" path="/var/lib/kubelet/pods/9aa898a0-670e-4c3b-87c8-7d1d275fc6b5/volumes" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.723766 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c603a0e0-e73c-4d68-b3f5-947d61505f43" path="/var/lib/kubelet/pods/c603a0e0-e73c-4d68-b3f5-947d61505f43/volumes" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735340 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735385 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735418 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735468 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735520 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735530 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735534 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.735951 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.736174 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.736257 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.736315 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.738145 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.739213 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.736358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.740472 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:35 crc kubenswrapper[4955]: I0202 13:06:35.745737 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.686144 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerStarted","Data":"33b72168bb86635c109623d78ab335c434f836065ed2aec3c2e5623a3c6aef7f"} Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.686921 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.689007 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.690545 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.691440 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42" exitCode=0 Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.691463 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d" exitCode=0 Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.691473 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde" exitCode=0 Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.691482 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410" exitCode=2 Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.691541 4955 scope.go:117] "RemoveContainer" containerID="b18b027fcdd07250496dbe25ad6002b30c69c4672a7bdf64b57e1c1fbab8831a" Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.693807 4955 generic.go:334] "Generic (PLEG): container finished" podID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" containerID="93a7e02da7010bd6fa6c3d6070237568c239c58e43394204122b318bbc4d7a08" exitCode=0 Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.693863 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"087ff40a-30e1-4f8f-919f-1f7148cc69ed","Type":"ContainerDied","Data":"93a7e02da7010bd6fa6c3d6070237568c239c58e43394204122b318bbc4d7a08"} Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.694901 4955 status_manager.go:851] "Failed to get status for pod" 
podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:36 crc kubenswrapper[4955]: I0202 13:06:36.696319 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.718338 4955 generic.go:334] "Generic (PLEG): container finished" podID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerID="33b72168bb86635c109623d78ab335c434f836065ed2aec3c2e5623a3c6aef7f" exitCode=0 Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.724832 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.725174 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.728745 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerDied","Data":"33b72168bb86635c109623d78ab335c434f836065ed2aec3c2e5623a3c6aef7f"} Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.732907 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.980262 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.981424 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.982029 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.982472 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:37 crc kubenswrapper[4955]: I0202 13:06:37.982822 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.011079 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.011615 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.012010 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.012283 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.088845 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kubelet-dir\") pod \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.088954 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "087ff40a-30e1-4f8f-919f-1f7148cc69ed" (UID: "087ff40a-30e1-4f8f-919f-1f7148cc69ed"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.088960 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089002 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089009 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-var-lock\") pod \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089058 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089105 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-var-lock" (OuterVolumeSpecName: "var-lock") pod "087ff40a-30e1-4f8f-919f-1f7148cc69ed" (UID: "087ff40a-30e1-4f8f-919f-1f7148cc69ed"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089146 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089188 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kube-api-access\") pod \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\" (UID: \"087ff40a-30e1-4f8f-919f-1f7148cc69ed\") " Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089210 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089209 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089451 4955 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089471 4955 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089479 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089488 4955 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.089499 4955 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/087ff40a-30e1-4f8f-919f-1f7148cc69ed-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.096064 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "087ff40a-30e1-4f8f-919f-1f7148cc69ed" (UID: "087ff40a-30e1-4f8f-919f-1f7148cc69ed"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.191077 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/087ff40a-30e1-4f8f-919f-1f7148cc69ed-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.745644 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerStarted","Data":"5b35245b3edfd65e422cfa9c2619fedc4f894c465faa28b6d2c39b87d2992e81"} Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.746108 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.746387 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.746637 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 
Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.749249 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.749941 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69" exitCode=0 Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.750016 4955 scope.go:117] "RemoveContainer" containerID="e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.750036 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.752683 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"087ff40a-30e1-4f8f-919f-1f7148cc69ed","Type":"ContainerDied","Data":"324e31abeae06cbe9be9a63024762223785885dfbd70605668291d91fe55ce42"} Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.752712 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324e31abeae06cbe9be9a63024762223785885dfbd70605668291d91fe55ce42" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.752734 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.773326 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.773491 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.773662 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.778102 4955 scope.go:117] "RemoveContainer" containerID="08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.783032 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.783281 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.783541 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.796867 4955 scope.go:117] "RemoveContainer" containerID="753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.824496 4955 scope.go:117] "RemoveContainer" containerID="1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.839013 4955 scope.go:117] "RemoveContainer" containerID="44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.859392 4955 scope.go:117] "RemoveContainer" containerID="ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.880426 4955 scope.go:117] "RemoveContainer" containerID="e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42" Feb 02 13:06:38 crc kubenswrapper[4955]: E0202 13:06:38.881074 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\": container with ID starting with e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42 not found: ID does not exist" containerID="e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.881116 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42"} err="failed to get container status \"e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\": rpc error: code = NotFound desc = could not find container \"e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42\": container with ID starting with e0bc50883795ca6ce20be2551fa8f1af6261cb78679bc31ac01b78e879238f42 not found: ID does not exist" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.881145 4955 scope.go:117] "RemoveContainer" containerID="08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d" Feb 02 13:06:38 crc kubenswrapper[4955]: E0202 13:06:38.881982 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\": container with ID starting with 08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d not found: ID does not exist" containerID="08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882020 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d"} err="failed to get container status \"08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\": rpc error: code = NotFound desc = could not find container 
\"08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d\": container with ID starting with 08c6ac3a0714742af7b07ba0b974b0b47d3d93d47a588b0b288f54e16e381e9d not found: ID does not exist" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882040 4955 scope.go:117] "RemoveContainer" containerID="753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde" Feb 02 13:06:38 crc kubenswrapper[4955]: E0202 13:06:38.882275 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\": container with ID starting with 753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde not found: ID does not exist" containerID="753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882291 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde"} err="failed to get container status \"753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\": rpc error: code = NotFound desc = could not find container \"753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde\": container with ID starting with 753f1b9b09cb51341a30d15250e2f3aca9ab47c45c34518409a54e057764adde not found: ID does not exist" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882304 4955 scope.go:117] "RemoveContainer" containerID="1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410" Feb 02 13:06:38 crc kubenswrapper[4955]: E0202 13:06:38.882583 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\": container with ID starting with 1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410 not found: ID does not exist" containerID="1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882603 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410"} err="failed to get container status \"1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\": rpc error: code = NotFound desc = could not find container \"1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410\": container with ID starting with 1044531d65efe1122b30a236374a107353a6603e760d2d9eb060229258308410 not found: ID does not exist" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882615 4955 scope.go:117] "RemoveContainer" containerID="44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69" Feb 02 13:06:38 crc kubenswrapper[4955]: E0202 13:06:38.882895 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\": container with ID starting with 44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69 not found: ID does not exist" containerID="44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882930 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69"} 
err="failed to get container status \"44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\": rpc error: code = NotFound desc = could not find container \"44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69\": container with ID starting with 44097004c529698baee39ff8c6cafb6c40e09e431e97171e602af18343089b69 not found: ID does not exist" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.882980 4955 scope.go:117] "RemoveContainer" containerID="ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182" Feb 02 13:06:38 crc kubenswrapper[4955]: E0202 13:06:38.883426 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\": container with ID starting with ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182 not found: ID does not exist" containerID="ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182" Feb 02 13:06:38 crc kubenswrapper[4955]: I0202 13:06:38.883457 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182"} err="failed to get container status \"ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\": rpc error: code = NotFound desc = could not find container \"ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182\": container with ID starting with ad42430edc08b1e851f0945e6bfbcf4209193755a58fab05de4c24399cc56182 not found: ID does not exist" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.685777 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.686217 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.686845 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.687260 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.687711 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: I0202 13:06:39.687743 4955 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.687954 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: 
connect: connection refused" interval="200ms" Feb 02 13:06:39 crc kubenswrapper[4955]: I0202 13:06:39.718188 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: I0202 13:06:39.718690 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: I0202 13:06:39.718954 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:39 crc kubenswrapper[4955]: I0202 13:06:39.722803 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 13:06:39 crc kubenswrapper[4955]: E0202 13:06:39.889354 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="400ms" Feb 02 13:06:40 crc kubenswrapper[4955]: E0202 13:06:40.108876 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.86:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-7xm64.18906fd0b17125ce openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-7xm64,UID:da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2,APIVersion:v1,ResourceVersion:29613,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/certified-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:06:35.67746811 +0000 UTC m=+246.589804560,LastTimestamp:2026-02-02 13:06:35.67746811 +0000 UTC m=+246.589804560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:06:40 crc kubenswrapper[4955]: E0202 13:06:40.290619 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="800ms" Feb 02 13:06:40 crc kubenswrapper[4955]: E0202 13:06:40.629598 4955 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.86:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
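
The "Failed to ensure lease exists, will retry" entries trace the node lease controller backing off while the apiserver is unreachable: after five failed lease updates it falls back to ensuring the lease, and each failed attempt doubles the retry interval (200ms, 400ms, 800ms above, then 1.6s, 3.2s and 6.4s below). A minimal sketch of that doubling backoff, assuming a 7s cap and a stand-in ensureLease function rather than the kubelet's actual implementation:

    // Sketch only: the doubling retry interval visible in the
    // "Failed to ensure lease exists, will retry" entries.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func ensureLease() error {
        // Stand-in for the Get/Create against the apiserver; here it
        // always fails the way "connect: connection refused" does above.
        return errors.New("connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap for the sketch
        for attempt := 0; attempt < 6; attempt++ {
            if err := ensureLease(); err == nil {
                break
            } else {
                fmt.Printf("Failed to ensure lease exists, will retry err=%q interval=%q\n",
                    err.Error(), interval.String())
            }
            time.Sleep(interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s -> 6.4s
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
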
Feb 02 13:06:40 crc kubenswrapper[4955]: I0202 13:06:40.630369 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:40 crc kubenswrapper[4955]: I0202 13:06:40.764314 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"06310384ed2080f848a89f3617792f5beaad2518d4627b5f05d808ddb8f6ff4c"} Feb 02 13:06:41 crc kubenswrapper[4955]: E0202 13:06:41.091102 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="1.6s" Feb 02 13:06:41 crc kubenswrapper[4955]: I0202 13:06:41.770733 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9ffaa3a884f80384d59d9b6a3f02eb516494c20ef0252de3a85e6240d460ea2f"} Feb 02 13:06:41 crc kubenswrapper[4955]: I0202 13:06:41.771381 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:41 crc kubenswrapper[4955]: E0202 13:06:41.771466 4955 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.86:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:41 crc kubenswrapper[4955]: I0202 13:06:41.771730 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:42 crc kubenswrapper[4955]: E0202 13:06:42.692541 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="3.2s" Feb 02 13:06:42 crc kubenswrapper[4955]: E0202 13:06:42.785080 4955 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.86:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:44 crc kubenswrapper[4955]: I0202 13:06:44.819090 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:44 crc kubenswrapper[4955]: I0202 13:06:44.821017 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:44 crc kubenswrapper[4955]: I0202 13:06:44.876191 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xm64" Feb 02
13:06:44 crc kubenswrapper[4955]: I0202 13:06:44.876849 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:44 crc kubenswrapper[4955]: I0202 13:06:44.877762 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:45 crc kubenswrapper[4955]: I0202 13:06:45.833002 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:06:45 crc kubenswrapper[4955]: I0202 13:06:45.833599 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:45 crc kubenswrapper[4955]: I0202 13:06:45.834009 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:45 crc kubenswrapper[4955]: E0202 13:06:45.894294 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.86:6443: connect: connection refused" interval="6.4s" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.716015 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.717080 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.717505 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.731831 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.731859 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:48 crc kubenswrapper[4955]: E0202 13:06:48.732315 4955 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.732964 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:48 crc kubenswrapper[4955]: I0202 13:06:48.814943 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2fbb8ee2806ecd6b2c969c6f8e7ce0cc84d93b30f2dd1e85e63007ae8bcc57ff"} Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.723431 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.723839 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.724413 4955 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.821316 4955 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="feedba41ecd2f026c28969ede857c1decd06cdd7114e587fc42cffcba1ed8031" exitCode=0
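
kube-apiserver-crc is a static pod, defined by a manifest on disk rather than an API object, so the kubelet publishes a read-only "mirror pod" for it in the API. When the manifest changes (the new pod UID 71bb4a3aecc4ba5b26c4b7318770ce13 versus the old mirror UID 2a18b769-e25b-453d-9617-219f7e480b33), the kubelet has to delete the stale mirror pod, which is the "Trying to delete pod" / "Deleting a mirror pod" loop above; each attempt fails with connection refused until the restarted apiserver comes back. A hedged client-go sketch of such a delete, guarded by the stale UID as a precondition; the kubeconfig wiring is an assumption of the sketch, not how the kubelet itself authenticates:

    // Sketch only: deleting a stale mirror pod with a UID precondition so
    // that only the old API copy is removed. Uses the default kubeconfig,
    // which is an assumption for illustration.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Stale mirror pod UID taken from the log entries above.
        staleUID := types.UID("2a18b769-e25b-453d-9617-219f7e480b33")
        err = client.CoreV1().Pods("openshift-kube-apiserver").Delete(
            context.TODO(),
            "kube-apiserver-crc",
            metav1.DeleteOptions{Preconditions: &metav1.Preconditions{UID: &staleUID}},
        )
        if err != nil {
            // While the apiserver is down this fails exactly like the
            // "Failed deleting a mirror pod" entries above.
            fmt.Println("Failed deleting a mirror pod:", err)
        }
    }
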
Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.821408 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"feedba41ecd2f026c28969ede857c1decd06cdd7114e587fc42cffcba1ed8031"} Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.821830 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.821869 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.822156 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: E0202 13:06:49.822374 4955 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.822579 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.822883 4955 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.825671 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.825719 4955 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="995a8c5f4d865cd956d49e6e7702944feb1fad045e53ed4e8e23e31c495443c5" exitCode=1 Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.825743 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"995a8c5f4d865cd956d49e6e7702944feb1fad045e53ed4e8e23e31c495443c5"} Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.826177 4955 scope.go:117] "RemoveContainer" containerID="995a8c5f4d865cd956d49e6e7702944feb1fad045e53ed4e8e23e31c495443c5" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.826407 4955 status_manager.go:851] "Failed to get status for pod" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" pod="openshift-kube-apiserver/installer-9-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.826694 4955 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.826906 4955 status_manager.go:851] "Failed to get status for pod" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" pod="openshift-marketplace/certified-operators-7xm64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7xm64\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:49 crc kubenswrapper[4955]: I0202 13:06:49.827072 4955 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.86:6443: connect: connection refused" Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.012265 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:06:50 crc kubenswrapper[4955]: E0202 13:06:50.110601 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.86:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-7xm64.18906fd0b17125ce openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-7xm64,UID:da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2,APIVersion:v1,ResourceVersion:29613,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/certified-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:06:35.67746811 +0000 UTC m=+246.589804560,LastTimestamp:2026-02-02 13:06:35.67746811 +0000 UTC m=+246.589804560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.835845 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.835930 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"70e4e41263b1e839d5ebed3e28a0c9c5650f2397f60fdaf12b544644c5c1bd2c"} Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.839335 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"518f99d2bdac4a071c72ce8d44c9bfef265383fa2fc6a486887be9b1d7e8e075"} Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.839366 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cb1df51a7796d49fd675fdb36cefd5751005a0f7391ad37a298b9e534e709610"} Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.839376 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29968328a354e63f75fea7049a48a5c7810dffa568259e4322e42ce87c755474"} Feb 02 13:06:50 crc kubenswrapper[4955]: I0202 13:06:50.839385 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ddc4ca6c1b7990aeedc37e8bd648d05cab857886aa2075ae699abe592f3ce29b"} Feb 02 13:06:51 crc kubenswrapper[4955]: I0202 13:06:51.846358 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"18fadbe96eb67ea732857dc4331da89feb069438e54c951cd40d1a911f539c8d"} Feb 02 13:06:51 crc kubenswrapper[4955]: I0202 13:06:51.846823 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:51 crc kubenswrapper[4955]: I0202 13:06:51.846857 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:53 crc kubenswrapper[4955]: I0202 13:06:53.733680 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:53 crc kubenswrapper[4955]: I0202 13:06:53.733957 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:53 crc kubenswrapper[4955]: I0202 13:06:53.741951 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:56 crc kubenswrapper[4955]: I0202 13:06:56.879616 4955 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:56 crc kubenswrapper[4955]: I0202 13:06:56.989435 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b5b4b0c0-0163-4fd8-a7cd-a58ac5fe34e5" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.004115 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.008328 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.515693 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.877039 4955 kubelet.go:1909] 
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.877076 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.878512 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.882329 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:06:57 crc kubenswrapper[4955]: I0202 13:06:57.882544 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b5b4b0c0-0163-4fd8-a7cd-a58ac5fe34e5" Feb 02 13:06:58 crc kubenswrapper[4955]: I0202 13:06:58.882894 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:58 crc kubenswrapper[4955]: I0202 13:06:58.883185 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2a18b769-e25b-453d-9617-219f7e480b33" Feb 02 13:06:58 crc kubenswrapper[4955]: I0202 13:06:58.887271 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b5b4b0c0-0163-4fd8-a7cd-a58ac5fe34e5" Feb 02 13:07:06 crc kubenswrapper[4955]: I0202 13:07:06.455060 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 13:07:06 crc kubenswrapper[4955]: I0202 13:07:06.526595 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 13:07:06 crc kubenswrapper[4955]: I0202 13:07:06.658888 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 13:07:06 crc kubenswrapper[4955]: I0202 13:07:06.769194 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:07:07 crc kubenswrapper[4955]: I0202 13:07:07.415493 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 13:07:07 crc kubenswrapper[4955]: I0202 13:07:07.518861 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:07:07 crc kubenswrapper[4955]: I0202 13:07:07.529471 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:07:07 crc kubenswrapper[4955]: I0202 13:07:07.665624 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 13:07:07 crc kubenswrapper[4955]: I0202 13:07:07.827529 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:07:07 crc kubenswrapper[4955]: I0202 13:07:07.958242 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.174495 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.216700 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.266645 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.380403 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.766186 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.795237 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.903474 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 13:07:08 crc kubenswrapper[4955]: I0202 13:07:08.946615 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.088782 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.383417 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.393061 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.463817 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.490358 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.575004 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.581180 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.660527 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.750384 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.781136 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.815809 4955 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.873817 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 13:07:09 crc kubenswrapper[4955]: I0202 13:07:09.931479 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.017234 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.179464 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.227918 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.319659 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.480090 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.485930 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.542179 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.543965 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.565031 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.604538 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.714516 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.795060 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.822956 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.984162 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 13:07:10 crc kubenswrapper[4955]: I0202 13:07:10.997508 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.034146 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.049180 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.132382 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.133526 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.199692 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.207769 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.219275 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.323836 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.330627 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.340655 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.371898 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.402143 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.544310 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.615469 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.620980 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.770978 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.921293 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.921820 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 13:07:11 crc kubenswrapper[4955]: I0202 13:07:11.971027 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.010128 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.069166 4955 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.158804 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.329423 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.413444 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.436601 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.464627 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.506515 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.591024 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.613351 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.659281 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.695190 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.727696 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.747976 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.833401 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.900710 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 13:07:12 crc kubenswrapper[4955]: I0202 13:07:12.932743 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.028819 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.100084 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.123780 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.168097 4955 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.209315 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.271393 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.293292 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.528944 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.532995 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.549209 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.599540 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.617795 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.719624 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.724335 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.768412 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.838494 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.902407 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.908249 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.939933 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.940454 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 13:07:13 crc kubenswrapper[4955]: I0202 13:07:13.997367 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.139208 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.174076 4955 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.227724 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.254459 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.325312 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.387592 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.471655 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.529425 4955 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.556650 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.570730 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.637419 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.679010 4955 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.713811 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 13:07:14 crc kubenswrapper[4955]: I0202 13:07:14.813435 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.016907 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.057578 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.076456 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.115263 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.196294 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.501079 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.554764 4955 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.657609 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.786467 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.802720 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 13:07:15 crc kubenswrapper[4955]: I0202 13:07:15.894230 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.107722 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.224343 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.335766 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.428773 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.506795 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.564190 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.580441 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.589313 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.600847 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.620071 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.632105 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.729864 4955 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.731376 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xm64" podStartSLOduration=40.280833926 podStartE2EDuration="42.731355679s" podCreationTimestamp="2026-02-02 13:06:34 +0000 UTC" firstStartedPulling="2026-02-02 13:06:35.67746507 +0000 UTC m=+246.589801520" lastFinishedPulling="2026-02-02 13:06:38.127986823 +0000 UTC 
m=+249.040323273" observedRunningTime="2026-02-02 13:06:56.987113261 +0000 UTC m=+267.899449721" watchObservedRunningTime="2026-02-02 13:07:16.731355679 +0000 UTC m=+287.643692139" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.733570 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.734293 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.734340 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.737972 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.757254 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.771503 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.771484042 podStartE2EDuration="20.771484042s" podCreationTimestamp="2026-02-02 13:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:07:16.755850388 +0000 UTC m=+287.668186838" watchObservedRunningTime="2026-02-02 13:07:16.771484042 +0000 UTC m=+287.683820482" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.907045 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.954276 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.975039 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 13:07:16 crc kubenswrapper[4955]: I0202 13:07:16.991935 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.076808 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.089595 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.105021 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.201301 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.230741 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.274682 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.306089 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.311202 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.314264 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.470088 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.485690 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.531104 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.553195 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.606363 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.610795 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.640864 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.655501 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.660664 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.773059 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.787401 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.887926 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.908679 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 13:07:17 crc kubenswrapper[4955]: I0202 13:07:17.935832 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.123651 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 
13:07:18.133968 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.138984 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.195985 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.211422 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.266034 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.346134 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.408167 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.429734 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.498894 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.508236 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.602611 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.643078 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.679507 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.687571 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.701438 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.804002 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.843927 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.893504 4955 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.977773 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 13:07:18 crc kubenswrapper[4955]: I0202 13:07:18.986927 4955 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.020475 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.053257 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.065889 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.080727 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.172046 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.182711 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.227683 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.353479 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.387825 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.466834 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.483117 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.503440 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.555049 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.583068 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.616134 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.621865 4955 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.622236 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9ffaa3a884f80384d59d9b6a3f02eb516494c20ef0252de3a85e6240d460ea2f" gracePeriod=5 Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.671824 4955 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.704855 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.849951 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.851588 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 13:07:19 crc kubenswrapper[4955]: I0202 13:07:19.938723 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.017399 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.038997 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.067175 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.119238 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.157300 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.280347 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.328172 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.480619 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.541014 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.562160 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.592489 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.875985 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 13:07:20 crc kubenswrapper[4955]: I0202 13:07:20.924172 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.154978 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.170676 
4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.314836 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.467992 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.485825 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.562540 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.587701 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.630904 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.704291 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.722862 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.789522 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.865821 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 13:07:21 crc kubenswrapper[4955]: I0202 13:07:21.979971 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 13:07:22 crc kubenswrapper[4955]: I0202 13:07:22.086248 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 13:07:22 crc kubenswrapper[4955]: I0202 13:07:22.170941 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 13:07:22 crc kubenswrapper[4955]: I0202 13:07:22.198004 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 13:07:22 crc kubenswrapper[4955]: I0202 13:07:22.547583 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 13:07:22 crc kubenswrapper[4955]: I0202 13:07:22.924937 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.237083 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.295138 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.376429 4955 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.582193 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.686905 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.688130 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.839969 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 13:07:23 crc kubenswrapper[4955]: I0202 13:07:23.897309 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 13:07:24 crc kubenswrapper[4955]: I0202 13:07:24.669498 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.014082 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.014127 4955 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9ffaa3a884f80384d59d9b6a3f02eb516494c20ef0252de3a85e6240d460ea2f" exitCode=137 Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.205335 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.205684 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245437 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245508 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245602 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245623 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245697 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245775 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245803 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245838 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.245930 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.246116 4955 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.246133 4955 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.246143 4955 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.246151 4955 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.255121 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.347240 4955 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:25 crc kubenswrapper[4955]: I0202 13:07:25.723096 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 13:07:26 crc kubenswrapper[4955]: I0202 13:07:26.019579 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 13:07:26 crc kubenswrapper[4955]: I0202 13:07:26.019634 4955 scope.go:117] "RemoveContainer" containerID="9ffaa3a884f80384d59d9b6a3f02eb516494c20ef0252de3a85e6240d460ea2f" Feb 02 13:07:26 crc kubenswrapper[4955]: I0202 13:07:26.019722 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:07:29 crc kubenswrapper[4955]: I0202 13:07:29.525707 4955 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.114755 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"] Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.115610 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" podUID="135262fe-e63f-4d62-8260-4a90ee8c1f26" containerName="route-controller-manager" containerID="cri-o://e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86" gracePeriod=30 Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.118230 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g2vv4"] Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.118439 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" podUID="586f9380-1574-4d6b-847d-d775fc1508b0" containerName="controller-manager" containerID="cri-o://7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6" gracePeriod=30 Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.455619 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.462795 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.529778 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/586f9380-1574-4d6b-847d-d775fc1508b0-serving-cert\") pod \"586f9380-1574-4d6b-847d-d775fc1508b0\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.529871 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/135262fe-e63f-4d62-8260-4a90ee8c1f26-serving-cert\") pod \"135262fe-e63f-4d62-8260-4a90ee8c1f26\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.529916 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6vzd\" (UniqueName: \"kubernetes.io/projected/135262fe-e63f-4d62-8260-4a90ee8c1f26-kube-api-access-g6vzd\") pod \"135262fe-e63f-4d62-8260-4a90ee8c1f26\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.529947 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-config\") pod \"135262fe-e63f-4d62-8260-4a90ee8c1f26\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.529989 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7njn4\" (UniqueName: \"kubernetes.io/projected/586f9380-1574-4d6b-847d-d775fc1508b0-kube-api-access-7njn4\") pod \"586f9380-1574-4d6b-847d-d775fc1508b0\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.530020 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-client-ca\") pod \"135262fe-e63f-4d62-8260-4a90ee8c1f26\" (UID: \"135262fe-e63f-4d62-8260-4a90ee8c1f26\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.530046 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-proxy-ca-bundles\") pod \"586f9380-1574-4d6b-847d-d775fc1508b0\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.530072 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-client-ca\") pod \"586f9380-1574-4d6b-847d-d775fc1508b0\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.530112 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-config\") pod \"586f9380-1574-4d6b-847d-d775fc1508b0\" (UID: \"586f9380-1574-4d6b-847d-d775fc1508b0\") " Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.531392 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-config" (OuterVolumeSpecName: "config") pod "586f9380-1574-4d6b-847d-d775fc1508b0" (UID: 
"586f9380-1574-4d6b-847d-d775fc1508b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.531408 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-config" (OuterVolumeSpecName: "config") pod "135262fe-e63f-4d62-8260-4a90ee8c1f26" (UID: "135262fe-e63f-4d62-8260-4a90ee8c1f26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.534931 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-client-ca" (OuterVolumeSpecName: "client-ca") pod "135262fe-e63f-4d62-8260-4a90ee8c1f26" (UID: "135262fe-e63f-4d62-8260-4a90ee8c1f26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.536525 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "586f9380-1574-4d6b-847d-d775fc1508b0" (UID: "586f9380-1574-4d6b-847d-d775fc1508b0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.536739 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586f9380-1574-4d6b-847d-d775fc1508b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "586f9380-1574-4d6b-847d-d775fc1508b0" (UID: "586f9380-1574-4d6b-847d-d775fc1508b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.536735 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "586f9380-1574-4d6b-847d-d775fc1508b0" (UID: "586f9380-1574-4d6b-847d-d775fc1508b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.537394 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135262fe-e63f-4d62-8260-4a90ee8c1f26-kube-api-access-g6vzd" (OuterVolumeSpecName: "kube-api-access-g6vzd") pod "135262fe-e63f-4d62-8260-4a90ee8c1f26" (UID: "135262fe-e63f-4d62-8260-4a90ee8c1f26"). InnerVolumeSpecName "kube-api-access-g6vzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.537823 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135262fe-e63f-4d62-8260-4a90ee8c1f26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "135262fe-e63f-4d62-8260-4a90ee8c1f26" (UID: "135262fe-e63f-4d62-8260-4a90ee8c1f26"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.545913 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586f9380-1574-4d6b-847d-d775fc1508b0-kube-api-access-7njn4" (OuterVolumeSpecName: "kube-api-access-7njn4") pod "586f9380-1574-4d6b-847d-d775fc1508b0" (UID: "586f9380-1574-4d6b-847d-d775fc1508b0"). 
InnerVolumeSpecName "kube-api-access-7njn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631572 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/135262fe-e63f-4d62-8260-4a90ee8c1f26-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631609 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6vzd\" (UniqueName: \"kubernetes.io/projected/135262fe-e63f-4d62-8260-4a90ee8c1f26-kube-api-access-g6vzd\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631620 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631629 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7njn4\" (UniqueName: \"kubernetes.io/projected/586f9380-1574-4d6b-847d-d775fc1508b0-kube-api-access-7njn4\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631638 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/135262fe-e63f-4d62-8260-4a90ee8c1f26-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631646 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631656 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631664 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586f9380-1574-4d6b-847d-d775fc1508b0-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:47 crc kubenswrapper[4955]: I0202 13:07:47.631671 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/586f9380-1574-4d6b-847d-d775fc1508b0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.134351 4955 generic.go:334] "Generic (PLEG): container finished" podID="586f9380-1574-4d6b-847d-d775fc1508b0" containerID="7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6" exitCode=0 Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.134394 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.134406 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" event={"ID":"586f9380-1574-4d6b-847d-d775fc1508b0","Type":"ContainerDied","Data":"7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6"} Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.134428 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g2vv4" event={"ID":"586f9380-1574-4d6b-847d-d775fc1508b0","Type":"ContainerDied","Data":"fa8b5b2e7b7a25b83785a0de1fd0d721e867459a1baaddc180f7ec9f3051dc12"} Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.134444 4955 scope.go:117] "RemoveContainer" containerID="7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.137926 4955 generic.go:334] "Generic (PLEG): container finished" podID="135262fe-e63f-4d62-8260-4a90ee8c1f26" containerID="e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86" exitCode=0 Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.137952 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" event={"ID":"135262fe-e63f-4d62-8260-4a90ee8c1f26","Type":"ContainerDied","Data":"e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86"} Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.137959 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.137975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh" event={"ID":"135262fe-e63f-4d62-8260-4a90ee8c1f26","Type":"ContainerDied","Data":"886ce91c06386ec84d0bebe543a31ee1d0b8d0143b88c46067f1f8884fb1c9f4"} Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.158391 4955 scope.go:117] "RemoveContainer" containerID="7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6" Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.159456 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6\": container with ID starting with 7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6 not found: ID does not exist" containerID="7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.159519 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6"} err="failed to get container status \"7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6\": rpc error: code = NotFound desc = could not find container \"7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6\": container with ID starting with 7ddc0be3671d8e54562355c1415349d15e9e9c8c471e8b1d05d8892e44a27bb6 not found: ID does not exist" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.159623 4955 scope.go:117] "RemoveContainer" containerID="e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86" Feb 02 
13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.163791 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g2vv4"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.169094 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g2vv4"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.173951 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.175069 4955 scope.go:117] "RemoveContainer" containerID="e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86" Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.175547 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86\": container with ID starting with e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86 not found: ID does not exist" containerID="e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.175585 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86"} err="failed to get container status \"e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86\": rpc error: code = NotFound desc = could not find container \"e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86\": container with ID starting with e81b06330e7fe96262de4a749472417832064487c67b16c7ef334c89f6714c86 not found: ID does not exist" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.177412 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9fbgh"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.819188 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-t8vhx"] Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.819826 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135262fe-e63f-4d62-8260-4a90ee8c1f26" containerName="route-controller-manager" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.819974 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="135262fe-e63f-4d62-8260-4a90ee8c1f26" containerName="route-controller-manager" Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.820024 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820044 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.820066 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586f9380-1574-4d6b-847d-d775fc1508b0" containerName="controller-manager" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820086 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="586f9380-1574-4d6b-847d-d775fc1508b0" containerName="controller-manager" Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.820200 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" containerName="installer" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820293 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" containerName="installer" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820613 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="087ff40a-30e1-4f8f-919f-1f7148cc69ed" containerName="installer" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820661 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="586f9380-1574-4d6b-847d-d775fc1508b0" containerName="controller-manager" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820689 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.820716 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="135262fe-e63f-4d62-8260-4a90ee8c1f26" containerName="route-controller-manager" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.821515 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.824122 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.825116 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.827133 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.827136 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.830457 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.830717 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.831031 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.831051 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.832999 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.833451 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.833734 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.833901 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.833984 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.834042 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.844085 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-t8vhx"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.844441 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.848520 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-config\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.848692 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwvhq\" (UniqueName: \"kubernetes.io/projected/97f9b27d-e676-4d60-a308-20069a867cad-kube-api-access-hwvhq\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.848798 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-config\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.848870 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-client-ca\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.848949 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f9b27d-e676-4d60-a308-20069a867cad-serving-cert\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.849018 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9168063-058c-4639-888b-b68045b30091-serving-cert\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.849228 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-proxy-ca-bundles\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.849306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-client-ca\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.849384 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglc2\" (UniqueName: \"kubernetes.io/projected/d9168063-058c-4639-888b-b68045b30091-kube-api-access-sglc2\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.850232 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk"] Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-config\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950845 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-client-ca\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950870 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f9b27d-e676-4d60-a308-20069a867cad-serving-cert\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950896 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9168063-058c-4639-888b-b68045b30091-serving-cert\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950920 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-proxy-ca-bundles\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " 
pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950946 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-client-ca\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.950978 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglc2\" (UniqueName: \"kubernetes.io/projected/d9168063-058c-4639-888b-b68045b30091-kube-api-access-sglc2\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.951005 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-config\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.951065 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwvhq\" (UniqueName: \"kubernetes.io/projected/97f9b27d-e676-4d60-a308-20069a867cad-kube-api-access-hwvhq\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.951974 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-client-ca\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.952589 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-proxy-ca-bundles\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.952610 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-client-ca\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.953427 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk"] Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.953900 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config kube-api-access-hwvhq serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" 
podUID="97f9b27d-e676-4d60-a308-20069a867cad" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.953435 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-config\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.954058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-config\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.955522 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9168063-058c-4639-888b-b68045b30091-serving-cert\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.959343 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f9b27d-e676-4d60-a308-20069a867cad-serving-cert\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.959854 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-t8vhx"] Feb 02 13:07:48 crc kubenswrapper[4955]: E0202 13:07:48.960256 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-sglc2], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" podUID="d9168063-058c-4639-888b-b68045b30091" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.968831 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwvhq\" (UniqueName: \"kubernetes.io/projected/97f9b27d-e676-4d60-a308-20069a867cad-kube-api-access-hwvhq\") pod \"route-controller-manager-cdfcd5df-t9nrk\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:48 crc kubenswrapper[4955]: I0202 13:07:48.974525 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglc2\" (UniqueName: \"kubernetes.io/projected/d9168063-058c-4639-888b-b68045b30091-kube-api-access-sglc2\") pod \"controller-manager-559465bd67-t8vhx\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.145968 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.146022 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.155009 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.159372 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255111 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-client-ca\") pod \"d9168063-058c-4639-888b-b68045b30091\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255194 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-config\") pod \"d9168063-058c-4639-888b-b68045b30091\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255265 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwvhq\" (UniqueName: \"kubernetes.io/projected/97f9b27d-e676-4d60-a308-20069a867cad-kube-api-access-hwvhq\") pod \"97f9b27d-e676-4d60-a308-20069a867cad\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255315 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-client-ca\") pod \"97f9b27d-e676-4d60-a308-20069a867cad\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255377 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9168063-058c-4639-888b-b68045b30091-serving-cert\") pod \"d9168063-058c-4639-888b-b68045b30091\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255415 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-proxy-ca-bundles\") pod \"d9168063-058c-4639-888b-b68045b30091\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255475 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sglc2\" (UniqueName: \"kubernetes.io/projected/d9168063-058c-4639-888b-b68045b30091-kube-api-access-sglc2\") pod \"d9168063-058c-4639-888b-b68045b30091\" (UID: \"d9168063-058c-4639-888b-b68045b30091\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255531 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-config\") pod \"97f9b27d-e676-4d60-a308-20069a867cad\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255613 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97f9b27d-e676-4d60-a308-20069a867cad-serving-cert\") pod \"97f9b27d-e676-4d60-a308-20069a867cad\" (UID: \"97f9b27d-e676-4d60-a308-20069a867cad\") " Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255685 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9168063-058c-4639-888b-b68045b30091" (UID: "d9168063-058c-4639-888b-b68045b30091"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.255913 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.256126 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-config" (OuterVolumeSpecName: "config") pod "97f9b27d-e676-4d60-a308-20069a867cad" (UID: "97f9b27d-e676-4d60-a308-20069a867cad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.256229 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d9168063-058c-4639-888b-b68045b30091" (UID: "d9168063-058c-4639-888b-b68045b30091"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.256237 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-client-ca" (OuterVolumeSpecName: "client-ca") pod "97f9b27d-e676-4d60-a308-20069a867cad" (UID: "97f9b27d-e676-4d60-a308-20069a867cad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.256349 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-config" (OuterVolumeSpecName: "config") pod "d9168063-058c-4639-888b-b68045b30091" (UID: "d9168063-058c-4639-888b-b68045b30091"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.258381 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9168063-058c-4639-888b-b68045b30091-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9168063-058c-4639-888b-b68045b30091" (UID: "d9168063-058c-4639-888b-b68045b30091"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.258389 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f9b27d-e676-4d60-a308-20069a867cad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97f9b27d-e676-4d60-a308-20069a867cad" (UID: "97f9b27d-e676-4d60-a308-20069a867cad"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.258536 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9168063-058c-4639-888b-b68045b30091-kube-api-access-sglc2" (OuterVolumeSpecName: "kube-api-access-sglc2") pod "d9168063-058c-4639-888b-b68045b30091" (UID: "d9168063-058c-4639-888b-b68045b30091"). InnerVolumeSpecName "kube-api-access-sglc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.260890 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f9b27d-e676-4d60-a308-20069a867cad-kube-api-access-hwvhq" (OuterVolumeSpecName: "kube-api-access-hwvhq") pod "97f9b27d-e676-4d60-a308-20069a867cad" (UID: "97f9b27d-e676-4d60-a308-20069a867cad"). InnerVolumeSpecName "kube-api-access-hwvhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356805 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356862 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwvhq\" (UniqueName: \"kubernetes.io/projected/97f9b27d-e676-4d60-a308-20069a867cad-kube-api-access-hwvhq\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356885 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356903 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9168063-058c-4639-888b-b68045b30091-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356919 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9168063-058c-4639-888b-b68045b30091-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356936 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sglc2\" (UniqueName: \"kubernetes.io/projected/d9168063-058c-4639-888b-b68045b30091-kube-api-access-sglc2\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356955 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f9b27d-e676-4d60-a308-20069a867cad-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.356971 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f9b27d-e676-4d60-a308-20069a867cad-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.721637 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135262fe-e63f-4d62-8260-4a90ee8c1f26" path="/var/lib/kubelet/pods/135262fe-e63f-4d62-8260-4a90ee8c1f26/volumes" Feb 02 13:07:49 crc kubenswrapper[4955]: I0202 13:07:49.722237 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586f9380-1574-4d6b-847d-d775fc1508b0" 
path="/var/lib/kubelet/pods/586f9380-1574-4d6b-847d-d775fc1508b0/volumes" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.149429 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.149483 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-t8vhx" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.185717 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-t8vhx"] Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.193180 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-l9vkb"] Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.194195 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.196078 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.196663 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.197149 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.197153 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.197216 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.198979 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.202265 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-t8vhx"] Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.214842 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-l9vkb"] Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.218339 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.223419 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk"] Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.227874 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-t9nrk"] Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.267847 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtrgk\" (UniqueName: \"kubernetes.io/projected/1a32dd82-8745-4353-bfb4-6e23c6bd784b-kube-api-access-wtrgk\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " 
pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.267911 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a32dd82-8745-4353-bfb4-6e23c6bd784b-serving-cert\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.267942 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-proxy-ca-bundles\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.267975 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-config\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.268038 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-client-ca\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.369659 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-client-ca\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.369988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtrgk\" (UniqueName: \"kubernetes.io/projected/1a32dd82-8745-4353-bfb4-6e23c6bd784b-kube-api-access-wtrgk\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.370118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a32dd82-8745-4353-bfb4-6e23c6bd784b-serving-cert\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.370225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-proxy-ca-bundles\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 
13:07:50.370360 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-config\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.371173 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-client-ca\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.371666 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-proxy-ca-bundles\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.372462 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-config\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.374926 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a32dd82-8745-4353-bfb4-6e23c6bd784b-serving-cert\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.386129 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtrgk\" (UniqueName: \"kubernetes.io/projected/1a32dd82-8745-4353-bfb4-6e23c6bd784b-kube-api-access-wtrgk\") pod \"controller-manager-776fcd767f-l9vkb\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.511584 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:50 crc kubenswrapper[4955]: I0202 13:07:50.698177 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-l9vkb"] Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.155647 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" event={"ID":"1a32dd82-8745-4353-bfb4-6e23c6bd784b","Type":"ContainerStarted","Data":"7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75"} Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.155691 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" event={"ID":"1a32dd82-8745-4353-bfb4-6e23c6bd784b","Type":"ContainerStarted","Data":"200c0ff3c933cea84711d463035636c41314bf857fd10ef325c3970fbcde3132"} Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.155972 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.167280 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.174075 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" podStartSLOduration=2.174053527 podStartE2EDuration="2.174053527s" podCreationTimestamp="2026-02-02 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:07:51.172389893 +0000 UTC m=+322.084726343" watchObservedRunningTime="2026-02-02 13:07:51.174053527 +0000 UTC m=+322.086389987" Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.723655 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f9b27d-e676-4d60-a308-20069a867cad" path="/var/lib/kubelet/pods/97f9b27d-e676-4d60-a308-20069a867cad/volumes" Feb 02 13:07:51 crc kubenswrapper[4955]: I0202 13:07:51.724367 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9168063-058c-4639-888b-b68045b30091" path="/var/lib/kubelet/pods/d9168063-058c-4639-888b-b68045b30091/volumes" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.361424 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-l9vkb"] Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.398762 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm"] Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.399517 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.402523 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.402698 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.402857 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.402973 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.403087 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.403279 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.408637 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm"] Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.502288 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-config\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.502338 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23a903c-9bd5-49e6-8167-5845b01ece74-serving-cert\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.502369 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6fst\" (UniqueName: \"kubernetes.io/projected/a23a903c-9bd5-49e6-8167-5845b01ece74-kube-api-access-m6fst\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.502548 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-client-ca\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.603471 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-client-ca\") pod 
\"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.603546 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-config\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.603604 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23a903c-9bd5-49e6-8167-5845b01ece74-serving-cert\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.603645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6fst\" (UniqueName: \"kubernetes.io/projected/a23a903c-9bd5-49e6-8167-5845b01ece74-kube-api-access-m6fst\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.604624 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-client-ca\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.604780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-config\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.610104 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23a903c-9bd5-49e6-8167-5845b01ece74-serving-cert\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.621633 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6fst\" (UniqueName: \"kubernetes.io/projected/a23a903c-9bd5-49e6-8167-5845b01ece74-kube-api-access-m6fst\") pod \"route-controller-manager-76946b564d-zscjm\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:52 crc kubenswrapper[4955]: I0202 13:07:52.715455 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:53 crc kubenswrapper[4955]: I0202 13:07:53.121372 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm"] Feb 02 13:07:53 crc kubenswrapper[4955]: I0202 13:07:53.174766 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" event={"ID":"a23a903c-9bd5-49e6-8167-5845b01ece74","Type":"ContainerStarted","Data":"2d3513089ab3b490c7649691c136486295fc736ebfb720a67775fc35acdc10f5"} Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.180580 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" event={"ID":"a23a903c-9bd5-49e6-8167-5845b01ece74","Type":"ContainerStarted","Data":"0b026277e33bf4947a893f581314873803e9493ff3bbe51f2f9b3b3d74f93c84"} Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.180724 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" podUID="1a32dd82-8745-4353-bfb4-6e23c6bd784b" containerName="controller-manager" containerID="cri-o://7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75" gracePeriod=30 Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.201591 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" podStartSLOduration=2.201553586 podStartE2EDuration="2.201553586s" podCreationTimestamp="2026-02-02 13:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:07:54.197410736 +0000 UTC m=+325.109747186" watchObservedRunningTime="2026-02-02 13:07:54.201553586 +0000 UTC m=+325.113890036" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.636834 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.664101 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-6dwk2"] Feb 02 13:07:54 crc kubenswrapper[4955]: E0202 13:07:54.664353 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a32dd82-8745-4353-bfb4-6e23c6bd784b" containerName="controller-manager" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.664367 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a32dd82-8745-4353-bfb4-6e23c6bd784b" containerName="controller-manager" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.664505 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a32dd82-8745-4353-bfb4-6e23c6bd784b" containerName="controller-manager" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.665438 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.670148 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-6dwk2"] Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.731368 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-config\") pod \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.731463 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtrgk\" (UniqueName: \"kubernetes.io/projected/1a32dd82-8745-4353-bfb4-6e23c6bd784b-kube-api-access-wtrgk\") pod \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.731531 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-client-ca\") pod \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.731644 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a32dd82-8745-4353-bfb4-6e23c6bd784b-serving-cert\") pod \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.731701 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-proxy-ca-bundles\") pod \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\" (UID: \"1a32dd82-8745-4353-bfb4-6e23c6bd784b\") " Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.731988 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-config\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732067 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9cgf\" (UniqueName: \"kubernetes.io/projected/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-kube-api-access-r9cgf\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732205 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-serving-cert\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732199 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-config" (OuterVolumeSpecName: "config") pod "1a32dd82-8745-4353-bfb4-6e23c6bd784b" (UID: "1a32dd82-8745-4353-bfb4-6e23c6bd784b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732261 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-client-ca\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732326 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732401 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.732462 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a32dd82-8745-4353-bfb4-6e23c6bd784b" (UID: "1a32dd82-8745-4353-bfb4-6e23c6bd784b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.733169 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a32dd82-8745-4353-bfb4-6e23c6bd784b" (UID: "1a32dd82-8745-4353-bfb4-6e23c6bd784b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.736706 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a32dd82-8745-4353-bfb4-6e23c6bd784b-kube-api-access-wtrgk" (OuterVolumeSpecName: "kube-api-access-wtrgk") pod "1a32dd82-8745-4353-bfb4-6e23c6bd784b" (UID: "1a32dd82-8745-4353-bfb4-6e23c6bd784b"). InnerVolumeSpecName "kube-api-access-wtrgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.736738 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a32dd82-8745-4353-bfb4-6e23c6bd784b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a32dd82-8745-4353-bfb4-6e23c6bd784b" (UID: "1a32dd82-8745-4353-bfb4-6e23c6bd784b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834130 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-config\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834183 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9cgf\" (UniqueName: \"kubernetes.io/projected/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-kube-api-access-r9cgf\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-serving-cert\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834245 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-client-ca\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834266 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834432 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834842 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtrgk\" (UniqueName: \"kubernetes.io/projected/1a32dd82-8745-4353-bfb4-6e23c6bd784b-kube-api-access-wtrgk\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834872 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a32dd82-8745-4353-bfb4-6e23c6bd784b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.834888 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a32dd82-8745-4353-bfb4-6e23c6bd784b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.835412 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-client-ca\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " 
pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.836219 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-config\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.838511 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.840859 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-serving-cert\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.860640 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9cgf\" (UniqueName: \"kubernetes.io/projected/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-kube-api-access-r9cgf\") pod \"controller-manager-d6f97d578-6dwk2\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:54 crc kubenswrapper[4955]: I0202 13:07:54.987334 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.154636 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-6dwk2"] Feb 02 13:07:55 crc kubenswrapper[4955]: W0202 13:07:55.158401 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67d67fcd_64a8_4a8c_a2e1_86ebaf82c6ec.slice/crio-5eabf3ceb6e601073468d2cfcd03169765e9cdb2f7e83040d1368aa081ce63ad WatchSource:0}: Error finding container 5eabf3ceb6e601073468d2cfcd03169765e9cdb2f7e83040d1368aa081ce63ad: Status 404 returned error can't find the container with id 5eabf3ceb6e601073468d2cfcd03169765e9cdb2f7e83040d1368aa081ce63ad Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.186241 4955 generic.go:334] "Generic (PLEG): container finished" podID="1a32dd82-8745-4353-bfb4-6e23c6bd784b" containerID="7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75" exitCode=0 Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.186300 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" event={"ID":"1a32dd82-8745-4353-bfb4-6e23c6bd784b","Type":"ContainerDied","Data":"7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75"} Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.186325 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" event={"ID":"1a32dd82-8745-4353-bfb4-6e23c6bd784b","Type":"ContainerDied","Data":"200c0ff3c933cea84711d463035636c41314bf857fd10ef325c3970fbcde3132"} Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.186341 4955 scope.go:117] "RemoveContainer" containerID="7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.186404 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-776fcd767f-l9vkb" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.190420 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" event={"ID":"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec","Type":"ContainerStarted","Data":"5eabf3ceb6e601073468d2cfcd03169765e9cdb2f7e83040d1368aa081ce63ad"} Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.190663 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.194607 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.227723 4955 scope.go:117] "RemoveContainer" containerID="7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75" Feb 02 13:07:55 crc kubenswrapper[4955]: E0202 13:07:55.228928 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75\": container with ID starting with 7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75 not found: ID does not exist" containerID="7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.228970 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75"} err="failed to get container status \"7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75\": rpc error: code = NotFound desc = could not find container \"7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75\": container with ID starting with 7c0831bb3fa0c4a1e5dabf3c4745144333abe66b83e4a370d10b0f82104d8b75 not found: ID does not exist" Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.238651 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-l9vkb"] Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.242359 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-l9vkb"] Feb 02 13:07:55 crc kubenswrapper[4955]: I0202 13:07:55.722329 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a32dd82-8745-4353-bfb4-6e23c6bd784b" path="/var/lib/kubelet/pods/1a32dd82-8745-4353-bfb4-6e23c6bd784b/volumes" Feb 02 13:07:56 crc kubenswrapper[4955]: I0202 13:07:56.196210 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" event={"ID":"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec","Type":"ContainerStarted","Data":"8973eb1d0c0777ecce6cc40378ac8a72c38fc70d1af7828fc1c325c5366ce3e7"} Feb 02 13:07:56 crc kubenswrapper[4955]: I0202 13:07:56.196528 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:56 crc kubenswrapper[4955]: I0202 13:07:56.201306 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:07:56 crc kubenswrapper[4955]: I0202 13:07:56.243657 4955 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" podStartSLOduration=4.243634297 podStartE2EDuration="4.243634297s" podCreationTimestamp="2026-02-02 13:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:07:56.214632791 +0000 UTC m=+327.126969311" watchObservedRunningTime="2026-02-02 13:07:56.243634297 +0000 UTC m=+327.155970777" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.696013 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kxp4p"] Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.697161 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.702400 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.722217 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxp4p"] Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.775251 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa1753-769d-42bf-92e4-751a1dfdf0be-catalog-content\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.776039 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa1753-769d-42bf-92e4-751a1dfdf0be-utilities\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.776111 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4sr\" (UniqueName: \"kubernetes.io/projected/99aa1753-769d-42bf-92e4-751a1dfdf0be-kube-api-access-4p4sr\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.877060 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa1753-769d-42bf-92e4-751a1dfdf0be-catalog-content\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.877139 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa1753-769d-42bf-92e4-751a1dfdf0be-utilities\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.877161 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4sr\" (UniqueName: \"kubernetes.io/projected/99aa1753-769d-42bf-92e4-751a1dfdf0be-kube-api-access-4p4sr\") 
pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.877573 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa1753-769d-42bf-92e4-751a1dfdf0be-catalog-content\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.877632 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa1753-769d-42bf-92e4-751a1dfdf0be-utilities\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:57 crc kubenswrapper[4955]: I0202 13:07:57.906421 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4sr\" (UniqueName: \"kubernetes.io/projected/99aa1753-769d-42bf-92e4-751a1dfdf0be-kube-api-access-4p4sr\") pod \"community-operators-kxp4p\" (UID: \"99aa1753-769d-42bf-92e4-751a1dfdf0be\") " pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:58 crc kubenswrapper[4955]: I0202 13:07:58.012744 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:07:58 crc kubenswrapper[4955]: I0202 13:07:58.418265 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxp4p"] Feb 02 13:07:58 crc kubenswrapper[4955]: W0202 13:07:58.430174 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99aa1753_769d_42bf_92e4_751a1dfdf0be.slice/crio-95a7950ca0d200c3b082b3abd8795efaabb3b768bc4906e6d49990923980a562 WatchSource:0}: Error finding container 95a7950ca0d200c3b082b3abd8795efaabb3b768bc4906e6d49990923980a562: Status 404 returned error can't find the container with id 95a7950ca0d200c3b082b3abd8795efaabb3b768bc4906e6d49990923980a562 Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.211764 4955 generic.go:334] "Generic (PLEG): container finished" podID="99aa1753-769d-42bf-92e4-751a1dfdf0be" containerID="cd03435f3f5785558198bfd039bc27ddf198b5f2b53519d1ea51afda77b36ae8" exitCode=0 Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.212063 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxp4p" event={"ID":"99aa1753-769d-42bf-92e4-751a1dfdf0be","Type":"ContainerDied","Data":"cd03435f3f5785558198bfd039bc27ddf198b5f2b53519d1ea51afda77b36ae8"} Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.212085 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxp4p" event={"ID":"99aa1753-769d-42bf-92e4-751a1dfdf0be","Type":"ContainerStarted","Data":"95a7950ca0d200c3b082b3abd8795efaabb3b768bc4906e6d49990923980a562"} Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.883135 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-24bdn"] Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.884204 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.887313 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 13:07:59 crc kubenswrapper[4955]: I0202 13:07:59.897357 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bdn"] Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.005383 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77ac3de-f22e-420e-8fc2-167c9433d128-utilities\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.005441 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mrf8\" (UniqueName: \"kubernetes.io/projected/a77ac3de-f22e-420e-8fc2-167c9433d128-kube-api-access-2mrf8\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.005766 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77ac3de-f22e-420e-8fc2-167c9433d128-catalog-content\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.086532 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qgd84"] Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.087445 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.089675 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.096794 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgd84"] Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.107048 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77ac3de-f22e-420e-8fc2-167c9433d128-catalog-content\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.107087 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77ac3de-f22e-420e-8fc2-167c9433d128-utilities\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.107106 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mrf8\" (UniqueName: \"kubernetes.io/projected/a77ac3de-f22e-420e-8fc2-167c9433d128-kube-api-access-2mrf8\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.107739 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77ac3de-f22e-420e-8fc2-167c9433d128-utilities\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.107761 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77ac3de-f22e-420e-8fc2-167c9433d128-catalog-content\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.128093 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mrf8\" (UniqueName: \"kubernetes.io/projected/a77ac3de-f22e-420e-8fc2-167c9433d128-kube-api-access-2mrf8\") pod \"redhat-marketplace-24bdn\" (UID: \"a77ac3de-f22e-420e-8fc2-167c9433d128\") " pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.204122 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.207832 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlgh\" (UniqueName: \"kubernetes.io/projected/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-kube-api-access-2rlgh\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.207885 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-catalog-content\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.207908 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-utilities\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.309296 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlgh\" (UniqueName: \"kubernetes.io/projected/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-kube-api-access-2rlgh\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.309343 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-catalog-content\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.309363 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-utilities\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.310046 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-utilities\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.310110 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-catalog-content\") pod \"redhat-operators-qgd84\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.326832 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlgh\" (UniqueName: \"kubernetes.io/projected/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-kube-api-access-2rlgh\") pod \"redhat-operators-qgd84\" (UID: 
\"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.401916 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.625037 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bdn"] Feb 02 13:08:00 crc kubenswrapper[4955]: W0202 13:08:00.632487 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda77ac3de_f22e_420e_8fc2_167c9433d128.slice/crio-36dd6594b775506c522cb3a6ff2fa30785181d67a88d00ac916129155f9be3f7 WatchSource:0}: Error finding container 36dd6594b775506c522cb3a6ff2fa30785181d67a88d00ac916129155f9be3f7: Status 404 returned error can't find the container with id 36dd6594b775506c522cb3a6ff2fa30785181d67a88d00ac916129155f9be3f7 Feb 02 13:08:00 crc kubenswrapper[4955]: I0202 13:08:00.776960 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgd84"] Feb 02 13:08:00 crc kubenswrapper[4955]: W0202 13:08:00.821408 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec7a8d60_ca53_4b69_aa03_06fdede2ae9a.slice/crio-4260b269c7a5e14c3d3b9f7dfbe38cde8fde8d7cdfbd160cde62a87564ca4fe1 WatchSource:0}: Error finding container 4260b269c7a5e14c3d3b9f7dfbe38cde8fde8d7cdfbd160cde62a87564ca4fe1: Status 404 returned error can't find the container with id 4260b269c7a5e14c3d3b9f7dfbe38cde8fde8d7cdfbd160cde62a87564ca4fe1 Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.222661 4955 generic.go:334] "Generic (PLEG): container finished" podID="a77ac3de-f22e-420e-8fc2-167c9433d128" containerID="d13ed4499e6e20715a41d3ec76ac1047bee0bef0ed131cb35a4861f781e8c09f" exitCode=0 Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.222732 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bdn" event={"ID":"a77ac3de-f22e-420e-8fc2-167c9433d128","Type":"ContainerDied","Data":"d13ed4499e6e20715a41d3ec76ac1047bee0bef0ed131cb35a4861f781e8c09f"} Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.222764 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bdn" event={"ID":"a77ac3de-f22e-420e-8fc2-167c9433d128","Type":"ContainerStarted","Data":"36dd6594b775506c522cb3a6ff2fa30785181d67a88d00ac916129155f9be3f7"} Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.224404 4955 generic.go:334] "Generic (PLEG): container finished" podID="99aa1753-769d-42bf-92e4-751a1dfdf0be" containerID="ee00d3793c9b8efde01c20a255dacc45bbfc99dd5b3ba9ebc09e2eee154d93bb" exitCode=0 Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.224496 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxp4p" event={"ID":"99aa1753-769d-42bf-92e4-751a1dfdf0be","Type":"ContainerDied","Data":"ee00d3793c9b8efde01c20a255dacc45bbfc99dd5b3ba9ebc09e2eee154d93bb"} Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.226265 4955 generic.go:334] "Generic (PLEG): container finished" podID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerID="8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253" exitCode=0 Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.226320 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qgd84" event={"ID":"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a","Type":"ContainerDied","Data":"8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253"} Feb 02 13:08:01 crc kubenswrapper[4955]: I0202 13:08:01.226350 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgd84" event={"ID":"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a","Type":"ContainerStarted","Data":"4260b269c7a5e14c3d3b9f7dfbe38cde8fde8d7cdfbd160cde62a87564ca4fe1"} Feb 02 13:08:02 crc kubenswrapper[4955]: I0202 13:08:02.232448 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxp4p" event={"ID":"99aa1753-769d-42bf-92e4-751a1dfdf0be","Type":"ContainerStarted","Data":"395faf65b8243dfe89c621209127bdca33fa3f4ca4f2aa14109f0c5f57c610b9"} Feb 02 13:08:02 crc kubenswrapper[4955]: I0202 13:08:02.254666 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kxp4p" podStartSLOduration=2.593703651 podStartE2EDuration="5.25464293s" podCreationTimestamp="2026-02-02 13:07:57 +0000 UTC" firstStartedPulling="2026-02-02 13:07:59.213658932 +0000 UTC m=+330.125995382" lastFinishedPulling="2026-02-02 13:08:01.874598211 +0000 UTC m=+332.786934661" observedRunningTime="2026-02-02 13:08:02.249061406 +0000 UTC m=+333.161397866" watchObservedRunningTime="2026-02-02 13:08:02.25464293 +0000 UTC m=+333.166979380" Feb 02 13:08:03 crc kubenswrapper[4955]: I0202 13:08:03.017282 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:08:03 crc kubenswrapper[4955]: I0202 13:08:03.017637 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:08:03 crc kubenswrapper[4955]: I0202 13:08:03.238099 4955 generic.go:334] "Generic (PLEG): container finished" podID="a77ac3de-f22e-420e-8fc2-167c9433d128" containerID="51b85f477cd244777fde84d66365b7284175a584accd2b7fdc1da1b66037bd7e" exitCode=0 Feb 02 13:08:03 crc kubenswrapper[4955]: I0202 13:08:03.238209 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bdn" event={"ID":"a77ac3de-f22e-420e-8fc2-167c9433d128","Type":"ContainerDied","Data":"51b85f477cd244777fde84d66365b7284175a584accd2b7fdc1da1b66037bd7e"} Feb 02 13:08:03 crc kubenswrapper[4955]: I0202 13:08:03.240312 4955 generic.go:334] "Generic (PLEG): container finished" podID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerID="f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c" exitCode=0 Feb 02 13:08:03 crc kubenswrapper[4955]: I0202 13:08:03.240641 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgd84" event={"ID":"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a","Type":"ContainerDied","Data":"f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c"} Feb 02 13:08:04 crc kubenswrapper[4955]: I0202 13:08:04.258996 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bdn" 
event={"ID":"a77ac3de-f22e-420e-8fc2-167c9433d128","Type":"ContainerStarted","Data":"8fc86af1e62bffdc83ee156efd8abd92cd3c41814467bee35edc5b6ed9a46155"} Feb 02 13:08:04 crc kubenswrapper[4955]: I0202 13:08:04.264687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgd84" event={"ID":"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a","Type":"ContainerStarted","Data":"4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1"} Feb 02 13:08:04 crc kubenswrapper[4955]: I0202 13:08:04.278218 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-24bdn" podStartSLOduration=2.875734918 podStartE2EDuration="5.278192548s" podCreationTimestamp="2026-02-02 13:07:59 +0000 UTC" firstStartedPulling="2026-02-02 13:08:01.225683073 +0000 UTC m=+332.138019523" lastFinishedPulling="2026-02-02 13:08:03.628140703 +0000 UTC m=+334.540477153" observedRunningTime="2026-02-02 13:08:04.274438948 +0000 UTC m=+335.186775448" watchObservedRunningTime="2026-02-02 13:08:04.278192548 +0000 UTC m=+335.190529008" Feb 02 13:08:04 crc kubenswrapper[4955]: I0202 13:08:04.295601 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qgd84" podStartSLOduration=1.667670427 podStartE2EDuration="4.295579565s" podCreationTimestamp="2026-02-02 13:08:00 +0000 UTC" firstStartedPulling="2026-02-02 13:08:01.22805483 +0000 UTC m=+332.140391300" lastFinishedPulling="2026-02-02 13:08:03.855963988 +0000 UTC m=+334.768300438" observedRunningTime="2026-02-02 13:08:04.290844942 +0000 UTC m=+335.203181422" watchObservedRunningTime="2026-02-02 13:08:04.295579565 +0000 UTC m=+335.207916025" Feb 02 13:08:08 crc kubenswrapper[4955]: I0202 13:08:08.012914 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:08:08 crc kubenswrapper[4955]: I0202 13:08:08.013418 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:08:08 crc kubenswrapper[4955]: I0202 13:08:08.052970 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:08:08 crc kubenswrapper[4955]: I0202 13:08:08.319598 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kxp4p" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.204604 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.204870 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.251413 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.339291 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-24bdn" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.403996 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.404402 4955 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:10 crc kubenswrapper[4955]: I0202 13:08:10.444353 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:11 crc kubenswrapper[4955]: I0202 13:08:11.361140 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.099407 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-6dwk2"] Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.100305 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" podUID="67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" containerName="controller-manager" containerID="cri-o://8973eb1d0c0777ecce6cc40378ac8a72c38fc70d1af7828fc1c325c5366ce3e7" gracePeriod=30 Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.379777 4955 generic.go:334] "Generic (PLEG): container finished" podID="67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" containerID="8973eb1d0c0777ecce6cc40378ac8a72c38fc70d1af7828fc1c325c5366ce3e7" exitCode=0 Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.379850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" event={"ID":"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec","Type":"ContainerDied","Data":"8973eb1d0c0777ecce6cc40378ac8a72c38fc70d1af7828fc1c325c5366ce3e7"} Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.601109 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.633159 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-client-ca\") pod \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.633272 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-serving-cert\") pod \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.633352 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-config\") pod \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.633384 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-proxy-ca-bundles\") pod \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\" (UID: \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.633429 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9cgf\" (UniqueName: \"kubernetes.io/projected/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-kube-api-access-r9cgf\") pod \"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\" (UID: 
\"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec\") " Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.635382 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" (UID: "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.635482 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" (UID: "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.635547 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-config" (OuterVolumeSpecName: "config") pod "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" (UID: "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.639774 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" (UID: "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.639801 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-kube-api-access-r9cgf" (OuterVolumeSpecName: "kube-api-access-r9cgf") pod "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" (UID: "67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec"). InnerVolumeSpecName "kube-api-access-r9cgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.734312 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9cgf\" (UniqueName: \"kubernetes.io/projected/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-kube-api-access-r9cgf\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.734350 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.734365 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.734378 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:27 crc kubenswrapper[4955]: I0202 13:08:27.734391 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.386232 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" event={"ID":"67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec","Type":"ContainerDied","Data":"5eabf3ceb6e601073468d2cfcd03169765e9cdb2f7e83040d1368aa081ce63ad"} Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.386646 4955 scope.go:117] "RemoveContainer" containerID="8973eb1d0c0777ecce6cc40378ac8a72c38fc70d1af7828fc1c325c5366ce3e7" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.386643 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-6dwk2" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.405676 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-6dwk2"] Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.408903 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-6dwk2"] Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.838099 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-bjn9q"] Feb 02 13:08:28 crc kubenswrapper[4955]: E0202 13:08:28.838303 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" containerName="controller-manager" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.838315 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" containerName="controller-manager" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.838392 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" containerName="controller-manager" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.838773 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.841091 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.841152 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.841475 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.842566 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.842661 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.843356 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.846857 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.850626 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-bjn9q"] Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.863456 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-config\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.863549 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8r87\" (UniqueName: \"kubernetes.io/projected/2b1929db-8e43-4ca3-a320-a535b571d0f7-kube-api-access-w8r87\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.863608 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-client-ca\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.863631 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-proxy-ca-bundles\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.863670 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2b1929db-8e43-4ca3-a320-a535b571d0f7-serving-cert\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.965206 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-client-ca\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.965250 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-proxy-ca-bundles\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.965289 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1929db-8e43-4ca3-a320-a535b571d0f7-serving-cert\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.965325 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-config\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.965373 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8r87\" (UniqueName: \"kubernetes.io/projected/2b1929db-8e43-4ca3-a320-a535b571d0f7-kube-api-access-w8r87\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.966526 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-client-ca\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.967233 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-proxy-ca-bundles\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.967254 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1929db-8e43-4ca3-a320-a535b571d0f7-config\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " 
pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.974782 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1929db-8e43-4ca3-a320-a535b571d0f7-serving-cert\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:28 crc kubenswrapper[4955]: I0202 13:08:28.982415 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8r87\" (UniqueName: \"kubernetes.io/projected/2b1929db-8e43-4ca3-a320-a535b571d0f7-kube-api-access-w8r87\") pod \"controller-manager-776fcd767f-bjn9q\" (UID: \"2b1929db-8e43-4ca3-a320-a535b571d0f7\") " pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:29 crc kubenswrapper[4955]: I0202 13:08:29.156745 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:29 crc kubenswrapper[4955]: I0202 13:08:29.540332 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776fcd767f-bjn9q"] Feb 02 13:08:29 crc kubenswrapper[4955]: I0202 13:08:29.724709 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec" path="/var/lib/kubelet/pods/67d67fcd-64a8-4a8c-a2e1-86ebaf82c6ec/volumes" Feb 02 13:08:30 crc kubenswrapper[4955]: I0202 13:08:30.400321 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" event={"ID":"2b1929db-8e43-4ca3-a320-a535b571d0f7","Type":"ContainerStarted","Data":"45fc7695571dfeee2f0fe06d62786fd3eb66ba579b0dd3a8624d6d1a05589874"} Feb 02 13:08:30 crc kubenswrapper[4955]: I0202 13:08:30.400364 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" event={"ID":"2b1929db-8e43-4ca3-a320-a535b571d0f7","Type":"ContainerStarted","Data":"85421a4d8905ab797e5fe17a4f1458a0402360be86e84bb416c6352bf9eb5d75"} Feb 02 13:08:30 crc kubenswrapper[4955]: I0202 13:08:30.400633 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:30 crc kubenswrapper[4955]: I0202 13:08:30.405538 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" Feb 02 13:08:30 crc kubenswrapper[4955]: I0202 13:08:30.421753 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-776fcd767f-bjn9q" podStartSLOduration=3.421726733 podStartE2EDuration="3.421726733s" podCreationTimestamp="2026-02-02 13:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:08:30.416717132 +0000 UTC m=+361.329053582" watchObservedRunningTime="2026-02-02 13:08:30.421726733 +0000 UTC m=+361.334063183" Feb 02 13:08:33 crc kubenswrapper[4955]: I0202 13:08:33.016857 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 02 13:08:33 crc kubenswrapper[4955]: I0202 13:08:33.017235 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.115425 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm"] Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.116897 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" podUID="a23a903c-9bd5-49e6-8167-5845b01ece74" containerName="route-controller-manager" containerID="cri-o://0b026277e33bf4947a893f581314873803e9493ff3bbe51f2f9b3b3d74f93c84" gracePeriod=30 Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.491497 4955 generic.go:334] "Generic (PLEG): container finished" podID="a23a903c-9bd5-49e6-8167-5845b01ece74" containerID="0b026277e33bf4947a893f581314873803e9493ff3bbe51f2f9b3b3d74f93c84" exitCode=0 Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.491637 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" event={"ID":"a23a903c-9bd5-49e6-8167-5845b01ece74","Type":"ContainerDied","Data":"0b026277e33bf4947a893f581314873803e9493ff3bbe51f2f9b3b3d74f93c84"} Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.554476 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.601917 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23a903c-9bd5-49e6-8167-5845b01ece74-serving-cert\") pod \"a23a903c-9bd5-49e6-8167-5845b01ece74\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.601970 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-config\") pod \"a23a903c-9bd5-49e6-8167-5845b01ece74\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.602014 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6fst\" (UniqueName: \"kubernetes.io/projected/a23a903c-9bd5-49e6-8167-5845b01ece74-kube-api-access-m6fst\") pod \"a23a903c-9bd5-49e6-8167-5845b01ece74\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.602067 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-client-ca\") pod \"a23a903c-9bd5-49e6-8167-5845b01ece74\" (UID: \"a23a903c-9bd5-49e6-8167-5845b01ece74\") " Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.603051 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-config" (OuterVolumeSpecName: "config") pod 
"a23a903c-9bd5-49e6-8167-5845b01ece74" (UID: "a23a903c-9bd5-49e6-8167-5845b01ece74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.603693 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-client-ca" (OuterVolumeSpecName: "client-ca") pod "a23a903c-9bd5-49e6-8167-5845b01ece74" (UID: "a23a903c-9bd5-49e6-8167-5845b01ece74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.607452 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23a903c-9bd5-49e6-8167-5845b01ece74-kube-api-access-m6fst" (OuterVolumeSpecName: "kube-api-access-m6fst") pod "a23a903c-9bd5-49e6-8167-5845b01ece74" (UID: "a23a903c-9bd5-49e6-8167-5845b01ece74"). InnerVolumeSpecName "kube-api-access-m6fst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.607532 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23a903c-9bd5-49e6-8167-5845b01ece74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a23a903c-9bd5-49e6-8167-5845b01ece74" (UID: "a23a903c-9bd5-49e6-8167-5845b01ece74"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.704692 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23a903c-9bd5-49e6-8167-5845b01ece74-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.704744 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.704760 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6fst\" (UniqueName: \"kubernetes.io/projected/a23a903c-9bd5-49e6-8167-5845b01ece74-kube-api-access-m6fst\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:47 crc kubenswrapper[4955]: I0202 13:08:47.704774 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23a903c-9bd5-49e6-8167-5845b01ece74-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.499674 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.499633 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm" event={"ID":"a23a903c-9bd5-49e6-8167-5845b01ece74","Type":"ContainerDied","Data":"2d3513089ab3b490c7649691c136486295fc736ebfb720a67775fc35acdc10f5"} Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.500506 4955 scope.go:117] "RemoveContainer" containerID="0b026277e33bf4947a893f581314873803e9493ff3bbe51f2f9b3b3d74f93c84" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.522971 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm"] Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.529926 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zscjm"] Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.853941 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2"] Feb 02 13:08:48 crc kubenswrapper[4955]: E0202 13:08:48.854201 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23a903c-9bd5-49e6-8167-5845b01ece74" containerName="route-controller-manager" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.854215 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23a903c-9bd5-49e6-8167-5845b01ece74" containerName="route-controller-manager" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.854361 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23a903c-9bd5-49e6-8167-5845b01ece74" containerName="route-controller-manager" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.855007 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.857292 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.857726 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.859200 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.859530 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.859780 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.860923 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.871528 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2"] Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.921711 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceee2ed-da68-491f-9a11-927ae652b31b-config\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.921750 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cceee2ed-da68-491f-9a11-927ae652b31b-client-ca\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.921788 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cceee2ed-da68-491f-9a11-927ae652b31b-serving-cert\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:48 crc kubenswrapper[4955]: I0202 13:08:48.921809 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtsp5\" (UniqueName: \"kubernetes.io/projected/cceee2ed-da68-491f-9a11-927ae652b31b-kube-api-access-mtsp5\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.022973 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtsp5\" (UniqueName: \"kubernetes.io/projected/cceee2ed-da68-491f-9a11-927ae652b31b-kube-api-access-mtsp5\") pod 
\"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.023103 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceee2ed-da68-491f-9a11-927ae652b31b-config\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.023142 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cceee2ed-da68-491f-9a11-927ae652b31b-client-ca\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.023201 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cceee2ed-da68-491f-9a11-927ae652b31b-serving-cert\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.024901 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceee2ed-da68-491f-9a11-927ae652b31b-config\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.029800 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cceee2ed-da68-491f-9a11-927ae652b31b-serving-cert\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.036981 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cceee2ed-da68-491f-9a11-927ae652b31b-client-ca\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.046067 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtsp5\" (UniqueName: \"kubernetes.io/projected/cceee2ed-da68-491f-9a11-927ae652b31b-kube-api-access-mtsp5\") pod \"route-controller-manager-6c84f8985c-cwhb2\" (UID: \"cceee2ed-da68-491f-9a11-927ae652b31b\") " pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.212032 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.628385 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2"] Feb 02 13:08:49 crc kubenswrapper[4955]: I0202 13:08:49.724849 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23a903c-9bd5-49e6-8167-5845b01ece74" path="/var/lib/kubelet/pods/a23a903c-9bd5-49e6-8167-5845b01ece74/volumes" Feb 02 13:08:50 crc kubenswrapper[4955]: I0202 13:08:50.512533 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" event={"ID":"cceee2ed-da68-491f-9a11-927ae652b31b","Type":"ContainerStarted","Data":"ebbcef0c18c03227fe17b57d20cecbb75cdf6857a0cc990458844b6cd8176918"} Feb 02 13:08:50 crc kubenswrapper[4955]: I0202 13:08:50.512924 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" event={"ID":"cceee2ed-da68-491f-9a11-927ae652b31b","Type":"ContainerStarted","Data":"6366d7f2dd09c2f6697a4639a3e3ad3710534ec84c2dd1cdf7f9b0492d2139cc"} Feb 02 13:08:50 crc kubenswrapper[4955]: I0202 13:08:50.512946 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:50 crc kubenswrapper[4955]: I0202 13:08:50.519799 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" Feb 02 13:08:50 crc kubenswrapper[4955]: I0202 13:08:50.554641 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c84f8985c-cwhb2" podStartSLOduration=3.5546223340000003 podStartE2EDuration="3.554622334s" podCreationTimestamp="2026-02-02 13:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:08:50.530923915 +0000 UTC m=+381.443260365" watchObservedRunningTime="2026-02-02 13:08:50.554622334 +0000 UTC m=+381.466958784" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.250324 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gv2zr"] Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.251501 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.269939 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gv2zr"] Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.411581 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-bound-sa-token\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.412576 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28ab71d6-560f-4016-b5cc-75e072aadfb6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.412614 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkwd\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-kube-api-access-bkkwd\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.412667 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28ab71d6-560f-4016-b5cc-75e072aadfb6-registry-certificates\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.412700 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28ab71d6-560f-4016-b5cc-75e072aadfb6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.412988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.413169 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28ab71d6-560f-4016-b5cc-75e072aadfb6-trusted-ca\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.413219 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-registry-tls\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.438329 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.514676 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28ab71d6-560f-4016-b5cc-75e072aadfb6-trusted-ca\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.515182 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-registry-tls\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.515290 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-bound-sa-token\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.515407 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28ab71d6-560f-4016-b5cc-75e072aadfb6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.515513 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkwd\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-kube-api-access-bkkwd\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.515694 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28ab71d6-560f-4016-b5cc-75e072aadfb6-registry-certificates\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.515812 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28ab71d6-560f-4016-b5cc-75e072aadfb6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.516415 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28ab71d6-560f-4016-b5cc-75e072aadfb6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.517393 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28ab71d6-560f-4016-b5cc-75e072aadfb6-registry-certificates\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.517691 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28ab71d6-560f-4016-b5cc-75e072aadfb6-trusted-ca\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.523413 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28ab71d6-560f-4016-b5cc-75e072aadfb6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.527325 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-registry-tls\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.536280 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-bound-sa-token\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.550831 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkwd\" (UniqueName: \"kubernetes.io/projected/28ab71d6-560f-4016-b5cc-75e072aadfb6-kube-api-access-bkkwd\") pod \"image-registry-66df7c8f76-gv2zr\" (UID: \"28ab71d6-560f-4016-b5cc-75e072aadfb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:56 crc kubenswrapper[4955]: I0202 13:08:56.570922 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:57 crc kubenswrapper[4955]: I0202 13:08:57.064994 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gv2zr"] Feb 02 13:08:57 crc kubenswrapper[4955]: I0202 13:08:57.555417 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" event={"ID":"28ab71d6-560f-4016-b5cc-75e072aadfb6","Type":"ContainerStarted","Data":"19974de70786a496aa80ba5952560090f721891b4d578e3e804374bb02cf93e0"} Feb 02 13:08:57 crc kubenswrapper[4955]: I0202 13:08:57.555992 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:08:57 crc kubenswrapper[4955]: I0202 13:08:57.556009 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" event={"ID":"28ab71d6-560f-4016-b5cc-75e072aadfb6","Type":"ContainerStarted","Data":"c1972f9eba1f0bc782f36816d1bce0fc63c16f177004a1e2780bf0bf9fe49ce7"} Feb 02 13:08:57 crc kubenswrapper[4955]: I0202 13:08:57.579215 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" podStartSLOduration=1.579192649 podStartE2EDuration="1.579192649s" podCreationTimestamp="2026-02-02 13:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:08:57.573367339 +0000 UTC m=+388.485703809" watchObservedRunningTime="2026-02-02 13:08:57.579192649 +0000 UTC m=+388.491529099" Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.016441 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.016785 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.016829 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.017401 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2b066fe3d22716e67cada877eecd7854555a99c0cda44ff4824ac9dad20f74b"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.017470 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://e2b066fe3d22716e67cada877eecd7854555a99c0cda44ff4824ac9dad20f74b" gracePeriod=600 Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 
13:09:03.591803 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="e2b066fe3d22716e67cada877eecd7854555a99c0cda44ff4824ac9dad20f74b" exitCode=0 Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.591923 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"e2b066fe3d22716e67cada877eecd7854555a99c0cda44ff4824ac9dad20f74b"} Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.592127 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"15ec1b1ba75d775d8ebc23447ae7b707fd98515f86f18a1cbc9275eaecb69192"} Feb 02 13:09:03 crc kubenswrapper[4955]: I0202 13:09:03.592156 4955 scope.go:117] "RemoveContainer" containerID="7b004a01a229dc29bd71d866fbe5dbaa7b0d9742e344848cb377e2b4d67b459d" Feb 02 13:09:16 crc kubenswrapper[4955]: I0202 13:09:16.576864 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gv2zr" Feb 02 13:09:16 crc kubenswrapper[4955]: I0202 13:09:16.654380 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rrdgr"] Feb 02 13:09:41 crc kubenswrapper[4955]: I0202 13:09:41.702361 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" podUID="d19da25f-25c6-4654-86a1-f681e982e738" containerName="registry" containerID="cri-o://2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057" gracePeriod=30 Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.138912 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.334692 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-trusted-ca\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.334799 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-registry-certificates\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.334848 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-registry-tls\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.334912 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-bound-sa-token\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.334976 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96q9r\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-kube-api-access-96q9r\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.335132 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.335226 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d19da25f-25c6-4654-86a1-f681e982e738-ca-trust-extracted\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.335279 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d19da25f-25c6-4654-86a1-f681e982e738-installation-pull-secrets\") pod \"d19da25f-25c6-4654-86a1-f681e982e738\" (UID: \"d19da25f-25c6-4654-86a1-f681e982e738\") " Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.336018 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.336227 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.342752 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.342780 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19da25f-25c6-4654-86a1-f681e982e738-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.343362 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.344028 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-kube-api-access-96q9r" (OuterVolumeSpecName: "kube-api-access-96q9r") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "kube-api-access-96q9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.355249 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.355614 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19da25f-25c6-4654-86a1-f681e982e738-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d19da25f-25c6-4654-86a1-f681e982e738" (UID: "d19da25f-25c6-4654-86a1-f681e982e738"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436631 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436670 4955 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d19da25f-25c6-4654-86a1-f681e982e738-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436686 4955 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436698 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436711 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96q9r\" (UniqueName: \"kubernetes.io/projected/d19da25f-25c6-4654-86a1-f681e982e738-kube-api-access-96q9r\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436722 4955 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d19da25f-25c6-4654-86a1-f681e982e738-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.436734 4955 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d19da25f-25c6-4654-86a1-f681e982e738-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.825731 4955 generic.go:334] "Generic (PLEG): container finished" podID="d19da25f-25c6-4654-86a1-f681e982e738" containerID="2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057" exitCode=0 Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.825867 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.825886 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" event={"ID":"d19da25f-25c6-4654-86a1-f681e982e738","Type":"ContainerDied","Data":"2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057"} Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.826295 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rrdgr" event={"ID":"d19da25f-25c6-4654-86a1-f681e982e738","Type":"ContainerDied","Data":"5dd44c10af024d9de3471f83ab478b8ad6c88c85a4e6c641f4eaece1b800a915"} Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.826317 4955 scope.go:117] "RemoveContainer" containerID="2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.848801 4955 scope.go:117] "RemoveContainer" containerID="2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057" Feb 02 13:09:42 crc kubenswrapper[4955]: E0202 13:09:42.849388 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057\": container with ID starting with 2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057 not found: ID does not exist" containerID="2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.849437 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057"} err="failed to get container status \"2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057\": rpc error: code = NotFound desc = could not find container \"2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057\": container with ID starting with 2ba3c6c4374ef5be37a46252d1225126e6ec5c7e937c3e56a06a27ef5d432057 not found: ID does not exist" Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.858946 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rrdgr"] Feb 02 13:09:42 crc kubenswrapper[4955]: I0202 13:09:42.862524 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rrdgr"] Feb 02 13:09:43 crc kubenswrapper[4955]: I0202 13:09:43.730026 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19da25f-25c6-4654-86a1-f681e982e738" path="/var/lib/kubelet/pods/d19da25f-25c6-4654-86a1-f681e982e738/volumes" Feb 02 13:11:03 crc kubenswrapper[4955]: I0202 13:11:03.016898 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:11:03 crc kubenswrapper[4955]: I0202 13:11:03.017401 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:11:33 crc 
kubenswrapper[4955]: I0202 13:11:33.017164 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:11:33 crc kubenswrapper[4955]: I0202 13:11:33.017695 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.431689 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng"] Feb 02 13:11:41 crc kubenswrapper[4955]: E0202 13:11:41.432493 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19da25f-25c6-4654-86a1-f681e982e738" containerName="registry" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.432512 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19da25f-25c6-4654-86a1-f681e982e738" containerName="registry" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.432654 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19da25f-25c6-4654-86a1-f681e982e738" containerName="registry" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.433156 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.435713 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.436125 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.443320 4955 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xgv4f" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.449262 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng"] Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.459016 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-f9lzz"] Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.459779 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-f9lzz" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.461938 4955 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hfqxc" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.466514 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-f9lzz"] Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.476454 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2t7cg"] Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.477265 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.479391 4955 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5h5xp" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.488046 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfxh\" (UniqueName: \"kubernetes.io/projected/2d029f2e-d391-4487-9c7d-1141c569de70-kube-api-access-hwfxh\") pod \"cert-manager-858654f9db-f9lzz\" (UID: \"2d029f2e-d391-4487-9c7d-1141c569de70\") " pod="cert-manager/cert-manager-858654f9db-f9lzz" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.488151 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlq9s\" (UniqueName: \"kubernetes.io/projected/b9a86428-fca6-4380-88ff-785b5710dc8d-kube-api-access-nlq9s\") pod \"cert-manager-cainjector-cf98fcc89-8j9ng\" (UID: \"b9a86428-fca6-4380-88ff-785b5710dc8d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.488266 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2t7cg"] Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.589188 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlq9s\" (UniqueName: \"kubernetes.io/projected/b9a86428-fca6-4380-88ff-785b5710dc8d-kube-api-access-nlq9s\") pod \"cert-manager-cainjector-cf98fcc89-8j9ng\" (UID: \"b9a86428-fca6-4380-88ff-785b5710dc8d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.589246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddhg\" (UniqueName: \"kubernetes.io/projected/07c0146f-f36a-4055-8cad-b6c65b94ddf4-kube-api-access-zddhg\") pod \"cert-manager-webhook-687f57d79b-2t7cg\" (UID: \"07c0146f-f36a-4055-8cad-b6c65b94ddf4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.589280 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfxh\" (UniqueName: \"kubernetes.io/projected/2d029f2e-d391-4487-9c7d-1141c569de70-kube-api-access-hwfxh\") pod \"cert-manager-858654f9db-f9lzz\" (UID: \"2d029f2e-d391-4487-9c7d-1141c569de70\") " pod="cert-manager/cert-manager-858654f9db-f9lzz" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.606091 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfxh\" (UniqueName: \"kubernetes.io/projected/2d029f2e-d391-4487-9c7d-1141c569de70-kube-api-access-hwfxh\") pod \"cert-manager-858654f9db-f9lzz\" (UID: \"2d029f2e-d391-4487-9c7d-1141c569de70\") " pod="cert-manager/cert-manager-858654f9db-f9lzz" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.616545 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlq9s\" (UniqueName: \"kubernetes.io/projected/b9a86428-fca6-4380-88ff-785b5710dc8d-kube-api-access-nlq9s\") pod \"cert-manager-cainjector-cf98fcc89-8j9ng\" (UID: \"b9a86428-fca6-4380-88ff-785b5710dc8d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.690478 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zddhg\" (UniqueName: \"kubernetes.io/projected/07c0146f-f36a-4055-8cad-b6c65b94ddf4-kube-api-access-zddhg\") pod \"cert-manager-webhook-687f57d79b-2t7cg\" (UID: \"07c0146f-f36a-4055-8cad-b6c65b94ddf4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.710764 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zddhg\" (UniqueName: \"kubernetes.io/projected/07c0146f-f36a-4055-8cad-b6c65b94ddf4-kube-api-access-zddhg\") pod \"cert-manager-webhook-687f57d79b-2t7cg\" (UID: \"07c0146f-f36a-4055-8cad-b6c65b94ddf4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.760399 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.776857 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-f9lzz" Feb 02 13:11:41 crc kubenswrapper[4955]: I0202 13:11:41.795637 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.187670 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-f9lzz"] Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.193053 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.223666 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng"] Feb 02 13:11:42 crc kubenswrapper[4955]: W0202 13:11:42.226507 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9a86428_fca6_4380_88ff_785b5710dc8d.slice/crio-282e5b5f49c35736c2d2b0f23980276093ac25cd2ccb0a66ae6d6a60846de50a WatchSource:0}: Error finding container 282e5b5f49c35736c2d2b0f23980276093ac25cd2ccb0a66ae6d6a60846de50a: Status 404 returned error can't find the container with id 282e5b5f49c35736c2d2b0f23980276093ac25cd2ccb0a66ae6d6a60846de50a Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.230600 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2t7cg"] Feb 02 13:11:42 crc kubenswrapper[4955]: W0202 13:11:42.235227 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c0146f_f36a_4055_8cad_b6c65b94ddf4.slice/crio-a33e518f38b9f0074ed5e9bee093bfa3a95a9f1f5a25058071e7e30fe6008a22 WatchSource:0}: Error finding container a33e518f38b9f0074ed5e9bee093bfa3a95a9f1f5a25058071e7e30fe6008a22: Status 404 returned error can't find the container with id a33e518f38b9f0074ed5e9bee093bfa3a95a9f1f5a25058071e7e30fe6008a22 Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.440514 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" event={"ID":"07c0146f-f36a-4055-8cad-b6c65b94ddf4","Type":"ContainerStarted","Data":"a33e518f38b9f0074ed5e9bee093bfa3a95a9f1f5a25058071e7e30fe6008a22"} Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.441594 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-f9lzz" 
event={"ID":"2d029f2e-d391-4487-9c7d-1141c569de70","Type":"ContainerStarted","Data":"4a5c35bb30aeb59824addc653c5396543d18365cb31bc3452c724db2214c47f4"} Feb 02 13:11:42 crc kubenswrapper[4955]: I0202 13:11:42.442348 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" event={"ID":"b9a86428-fca6-4380-88ff-785b5710dc8d","Type":"ContainerStarted","Data":"282e5b5f49c35736c2d2b0f23980276093ac25cd2ccb0a66ae6d6a60846de50a"} Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.465091 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" event={"ID":"07c0146f-f36a-4055-8cad-b6c65b94ddf4","Type":"ContainerStarted","Data":"e78791aa3e7a12be76611e17d350e60f88298093e2d7b8c507104c15c16a8354"} Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.465589 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.466208 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-f9lzz" event={"ID":"2d029f2e-d391-4487-9c7d-1141c569de70","Type":"ContainerStarted","Data":"1521bd481859dffadc7780ea411c1f53ac02c37ad486d28105ec2d57c575003e"} Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.467262 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" event={"ID":"b9a86428-fca6-4380-88ff-785b5710dc8d","Type":"ContainerStarted","Data":"f30709fa124da488d1881fd317b151acd75b4be65d4b579cff1e2db127ed0703"} Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.484037 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" podStartSLOduration=1.610158698 podStartE2EDuration="5.484016797s" podCreationTimestamp="2026-02-02 13:11:41 +0000 UTC" firstStartedPulling="2026-02-02 13:11:42.237208564 +0000 UTC m=+553.149545014" lastFinishedPulling="2026-02-02 13:11:46.111066663 +0000 UTC m=+557.023403113" observedRunningTime="2026-02-02 13:11:46.48073315 +0000 UTC m=+557.393069600" watchObservedRunningTime="2026-02-02 13:11:46.484016797 +0000 UTC m=+557.396353257" Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.498457 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-f9lzz" podStartSLOduration=1.546519897 podStartE2EDuration="5.498427931s" podCreationTimestamp="2026-02-02 13:11:41 +0000 UTC" firstStartedPulling="2026-02-02 13:11:42.192830592 +0000 UTC m=+553.105167032" lastFinishedPulling="2026-02-02 13:11:46.144738596 +0000 UTC m=+557.057075066" observedRunningTime="2026-02-02 13:11:46.496095697 +0000 UTC m=+557.408432157" watchObservedRunningTime="2026-02-02 13:11:46.498427931 +0000 UTC m=+557.410764381" Feb 02 13:11:46 crc kubenswrapper[4955]: I0202 13:11:46.512882 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8j9ng" podStartSLOduration=1.619358861 podStartE2EDuration="5.512864877s" podCreationTimestamp="2026-02-02 13:11:41 +0000 UTC" firstStartedPulling="2026-02-02 13:11:42.228767817 +0000 UTC m=+553.141104267" lastFinishedPulling="2026-02-02 13:11:46.122273823 +0000 UTC m=+557.034610283" observedRunningTime="2026-02-02 13:11:46.510806029 +0000 UTC m=+557.423142479" watchObservedRunningTime="2026-02-02 13:11:46.512864877 +0000 UTC m=+557.425201327" Feb 02 
13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.419306 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z2cps"] Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420106 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-controller" containerID="cri-o://f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420151 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="nbdb" containerID="cri-o://7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420238 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="northd" containerID="cri-o://a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420284 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420326 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-node" containerID="cri-o://2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420360 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-acl-logging" containerID="cri-o://06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.420588 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="sbdb" containerID="cri-o://9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.458351 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" containerID="cri-o://2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" gracePeriod=30 Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.760758 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/3.log" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.763199 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovn-acl-logging/0.log" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.763957 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovn-controller/0.log" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.764535 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.798365 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-2t7cg" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842288 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g7z6n"] Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.842598 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842620 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.842636 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842649 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.842661 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="sbdb" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842672 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="sbdb" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.842687 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="northd" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842689 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-kubelet\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842744 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-script-lib\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842768 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842824 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842697 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="northd" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842871 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842878 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-ovn-kubernetes\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842908 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.842914 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842938 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842941 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-bin\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842963 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-netd\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.842986 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-systemd\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843017 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-var-lib-openvswitch\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843042 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-config\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843034 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843049 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843067 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-netns\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843131 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d35d22-ea6a-4ada-a086-b199c153c940-ovn-node-metrics-cert\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843206 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-slash\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843225 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-etc-openvswitch\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843264 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-ovn\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-node-log\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843296 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-systemd-units\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843311 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-log-socket\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843091 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843111 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843356 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lr87\" (UniqueName: \"kubernetes.io/projected/e0d35d22-ea6a-4ada-a086-b199c153c940-kube-api-access-8lr87\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843292 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843329 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843386 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-openvswitch\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843405 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-env-overrides\") pod \"e0d35d22-ea6a-4ada-a086-b199c153c940\" (UID: \"e0d35d22-ea6a-4ada-a086-b199c153c940\") " Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843636 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.842966 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kubecfg-setup" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843671 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kubecfg-setup" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.843700 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843710 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.843726 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="nbdb" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843735 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="nbdb" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.843758 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843784 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.843797 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843805 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843816 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843831 4955 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843842 4955 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843845 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-slash" (OuterVolumeSpecName: "host-slash") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843852 4955 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843872 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843886 4955 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843894 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-node-log" (OuterVolumeSpecName: "node-log") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843902 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843912 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843922 4955 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843945 4955 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843965 4955 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843982 4955 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.843930 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-log-socket" (OuterVolumeSpecName: "log-socket") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.843821 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-acl-logging" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844019 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-acl-logging" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.844037 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-node" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844049 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-node" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844462 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844484 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844498 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-node" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844508 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844526 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844595 4955 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovn-acl-logging" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844605 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="sbdb" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844615 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="northd" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844626 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="nbdb" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844636 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844634 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844688 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: E0202 13:11:51.844786 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844798 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844927 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.844944 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerName="ovnkube-controller" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.848334 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.849346 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d35d22-ea6a-4ada-a086-b199c153c940-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.850168 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d35d22-ea6a-4ada-a086-b199c153c940-kube-api-access-8lr87" (OuterVolumeSpecName: "kube-api-access-8lr87") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "kube-api-access-8lr87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.861300 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e0d35d22-ea6a-4ada-a086-b199c153c940" (UID: "e0d35d22-ea6a-4ada-a086-b199c153c940"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945548 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdr5\" (UniqueName: \"kubernetes.io/projected/907addaf-fd40-44aa-9a8f-4457d984bdad-kube-api-access-ftdr5\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945662 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-cni-bin\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945710 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-var-lib-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945736 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945811 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-ovnkube-config\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945854 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-slash\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.945973 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-env-overrides\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946000 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-kubelet\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946022 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-systemd-units\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946045 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-ovnkube-script-lib\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946068 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-systemd\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946104 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-run-netns\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946125 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-etc-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946140 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/907addaf-fd40-44aa-9a8f-4457d984bdad-ovn-node-metrics-cert\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946159 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-log-socket\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946279 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946367 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-node-log\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946408 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-run-ovn-kubernetes\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946445 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-cni-netd\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946541 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-ovn\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946653 4955 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946671 4955 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946684 4955 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946695 4955 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946707 4955 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946722 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lr87\" (UniqueName: \"kubernetes.io/projected/e0d35d22-ea6a-4ada-a086-b199c153c940-kube-api-access-8lr87\") on node \"crc\" 
DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946735 4955 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946747 4955 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d35d22-ea6a-4ada-a086-b199c153c940-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946759 4955 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0d35d22-ea6a-4ada-a086-b199c153c940-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:51 crc kubenswrapper[4955]: I0202 13:11:51.946771 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d35d22-ea6a-4ada-a086-b199c153c940-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.047812 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.047880 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-node-log\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.047911 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.047916 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-run-ovn-kubernetes\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.047969 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-cni-netd\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048002 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-ovn\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048023 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdr5\" (UniqueName: \"kubernetes.io/projected/907addaf-fd40-44aa-9a8f-4457d984bdad-kube-api-access-ftdr5\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048019 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-node-log\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048048 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-cni-bin\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048064 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-var-lib-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048081 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048088 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-cni-netd\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048114 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-cni-bin\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-ovnkube-config\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048166 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-var-lib-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.047948 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-run-ovn-kubernetes\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048196 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-slash\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048236 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-env-overrides\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048257 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048096 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-ovn\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048282 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-kubelet\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048320 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-slash\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048322 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-systemd-units\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-systemd-units\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048375 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-ovnkube-script-lib\") pod \"ovnkube-node-g7z6n\" (UID: 
\"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048412 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-systemd\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048462 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-run-netns\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048481 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-kubelet\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048488 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-etc-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048525 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/907addaf-fd40-44aa-9a8f-4457d984bdad-ovn-node-metrics-cert\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048531 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-etc-openvswitch\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048584 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-log-socket\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048645 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-log-socket\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.048765 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-run-systemd\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 
13:11:52.048812 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/907addaf-fd40-44aa-9a8f-4457d984bdad-host-run-netns\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.049175 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-env-overrides\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.049363 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-ovnkube-config\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.049465 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/907addaf-fd40-44aa-9a8f-4457d984bdad-ovnkube-script-lib\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.052112 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/907addaf-fd40-44aa-9a8f-4457d984bdad-ovn-node-metrics-cert\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.069354 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdr5\" (UniqueName: \"kubernetes.io/projected/907addaf-fd40-44aa-9a8f-4457d984bdad-kube-api-access-ftdr5\") pod \"ovnkube-node-g7z6n\" (UID: \"907addaf-fd40-44aa-9a8f-4457d984bdad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.167166 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.503159 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovnkube-controller/3.log" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.505169 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovn-acl-logging/0.log" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.505672 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z2cps_e0d35d22-ea6a-4ada-a086-b199c153c940/ovn-controller/0.log" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506019 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506059 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506101 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506110 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506128 4955 scope.go:117] "RemoveContainer" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506066 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506229 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506260 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506272 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506280 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506288 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" exitCode=143 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506298 4955 generic.go:334] "Generic (PLEG): container finished" podID="e0d35d22-ea6a-4ada-a086-b199c153c940" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" exitCode=143 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506276 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506323 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506339 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506353 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506366 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506381 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506388 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506395 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506402 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506408 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506415 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506422 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506428 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506436 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506446 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506454 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506461 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506468 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506476 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506482 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506513 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506521 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506526 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506532 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506542 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506570 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506580 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506587 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506594 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506601 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506614 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506622 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506629 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506635 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506641 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506651 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z2cps" event={"ID":"e0d35d22-ea6a-4ada-a086-b199c153c940","Type":"ContainerDied","Data":"c1189fb33c8dc3937d32079c51d8b7d30e954f132051330fbb745dc397111107"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506684 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506710 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506806 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506859 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506867 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506873 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506880 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506887 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506893 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.506900 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.507608 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="907addaf-fd40-44aa-9a8f-4457d984bdad" containerID="9c56912eb7587631df44d7bf67a37cf638c6c3a482cc27275bfad1f33ef37435" exitCode=0 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.507654 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerDied","Data":"9c56912eb7587631df44d7bf67a37cf638c6c3a482cc27275bfad1f33ef37435"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.507671 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"def9d77b54f0f0529aafe08cfd7ecf926bd97a6843c0a46f64ee60e9f7b516de"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.510124 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/2.log" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.511026 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/1.log" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.511073 4955 generic.go:334] "Generic (PLEG): container finished" podID="93e471b4-0f7f-4216-8f9c-911f21b64e1e" containerID="6c3e909a5c1d539466ff95169b2a61636dd2e41a3596e08414bd64392d29dd9f" exitCode=2 Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.511106 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerDied","Data":"6c3e909a5c1d539466ff95169b2a61636dd2e41a3596e08414bd64392d29dd9f"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.511147 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb"} Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.511606 4955 scope.go:117] "RemoveContainer" containerID="6c3e909a5c1d539466ff95169b2a61636dd2e41a3596e08414bd64392d29dd9f" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.511875 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7bpsz_openshift-multus(93e471b4-0f7f-4216-8f9c-911f21b64e1e)\"" pod="openshift-multus/multus-7bpsz" podUID="93e471b4-0f7f-4216-8f9c-911f21b64e1e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.534196 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.558443 4955 scope.go:117] "RemoveContainer" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.585227 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z2cps"] Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.588406 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z2cps"] Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.598863 4955 scope.go:117] "RemoveContainer" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.620957 4955 scope.go:117] 
"RemoveContainer" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.636404 4955 scope.go:117] "RemoveContainer" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.647493 4955 scope.go:117] "RemoveContainer" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.659851 4955 scope.go:117] "RemoveContainer" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.676076 4955 scope.go:117] "RemoveContainer" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.706863 4955 scope.go:117] "RemoveContainer" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.720565 4955 scope.go:117] "RemoveContainer" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.721027 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": container with ID starting with 2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381 not found: ID does not exist" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.721070 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} err="failed to get container status \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": rpc error: code = NotFound desc = could not find container \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": container with ID starting with 2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.721101 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.721616 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": container with ID starting with 5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e not found: ID does not exist" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.721681 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} err="failed to get container status \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": rpc error: code = NotFound desc = could not find container \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": container with ID starting with 5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.721713 4955 scope.go:117] 
"RemoveContainer" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.722262 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": container with ID starting with 9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4 not found: ID does not exist" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.722301 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} err="failed to get container status \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": rpc error: code = NotFound desc = could not find container \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": container with ID starting with 9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.722321 4955 scope.go:117] "RemoveContainer" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.722684 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": container with ID starting with 7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6 not found: ID does not exist" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.722707 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} err="failed to get container status \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": rpc error: code = NotFound desc = could not find container \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": container with ID starting with 7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.722721 4955 scope.go:117] "RemoveContainer" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.722963 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": container with ID starting with a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411 not found: ID does not exist" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.722986 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} err="failed to get container status \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": rpc error: code = NotFound desc = could not find container \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": container with ID starting with 
a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.722999 4955 scope.go:117] "RemoveContainer" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.723380 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": container with ID starting with f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36 not found: ID does not exist" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.723407 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} err="failed to get container status \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": rpc error: code = NotFound desc = could not find container \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": container with ID starting with f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.723420 4955 scope.go:117] "RemoveContainer" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.723689 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": container with ID starting with 2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e not found: ID does not exist" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.723714 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} err="failed to get container status \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": rpc error: code = NotFound desc = could not find container \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": container with ID starting with 2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.723729 4955 scope.go:117] "RemoveContainer" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.723965 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": container with ID starting with 06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b not found: ID does not exist" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724003 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} err="failed to get container status \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": rpc 
error: code = NotFound desc = could not find container \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": container with ID starting with 06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724029 4955 scope.go:117] "RemoveContainer" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.724248 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": container with ID starting with f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862 not found: ID does not exist" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724276 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} err="failed to get container status \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": rpc error: code = NotFound desc = could not find container \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": container with ID starting with f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724293 4955 scope.go:117] "RemoveContainer" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" Feb 02 13:11:52 crc kubenswrapper[4955]: E0202 13:11:52.724501 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": container with ID starting with 830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14 not found: ID does not exist" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724528 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} err="failed to get container status \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": rpc error: code = NotFound desc = could not find container \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": container with ID starting with 830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724547 4955 scope.go:117] "RemoveContainer" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724750 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} err="failed to get container status \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": rpc error: code = NotFound desc = could not find container \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": container with ID starting with 2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 
13:11:52.724774 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724974 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} err="failed to get container status \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": rpc error: code = NotFound desc = could not find container \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": container with ID starting with 5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.724998 4955 scope.go:117] "RemoveContainer" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.725237 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} err="failed to get container status \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": rpc error: code = NotFound desc = could not find container \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": container with ID starting with 9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.725269 4955 scope.go:117] "RemoveContainer" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.725572 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} err="failed to get container status \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": rpc error: code = NotFound desc = could not find container \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": container with ID starting with 7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.725599 4955 scope.go:117] "RemoveContainer" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.726019 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} err="failed to get container status \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": rpc error: code = NotFound desc = could not find container \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": container with ID starting with a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.726043 4955 scope.go:117] "RemoveContainer" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.726738 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} err="failed to get container status 
\"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": rpc error: code = NotFound desc = could not find container \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": container with ID starting with f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.726761 4955 scope.go:117] "RemoveContainer" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.726965 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} err="failed to get container status \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": rpc error: code = NotFound desc = could not find container \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": container with ID starting with 2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.726991 4955 scope.go:117] "RemoveContainer" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.727577 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} err="failed to get container status \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": rpc error: code = NotFound desc = could not find container \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": container with ID starting with 06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.727600 4955 scope.go:117] "RemoveContainer" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.727877 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} err="failed to get container status \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": rpc error: code = NotFound desc = could not find container \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": container with ID starting with f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.727905 4955 scope.go:117] "RemoveContainer" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728120 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} err="failed to get container status \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": rpc error: code = NotFound desc = could not find container \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": container with ID starting with 830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728146 4955 scope.go:117] "RemoveContainer" 
containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728374 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} err="failed to get container status \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": rpc error: code = NotFound desc = could not find container \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": container with ID starting with 2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728396 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728639 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} err="failed to get container status \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": rpc error: code = NotFound desc = could not find container \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": container with ID starting with 5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728674 4955 scope.go:117] "RemoveContainer" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728886 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} err="failed to get container status \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": rpc error: code = NotFound desc = could not find container \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": container with ID starting with 9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.728923 4955 scope.go:117] "RemoveContainer" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.729184 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} err="failed to get container status \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": rpc error: code = NotFound desc = could not find container \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": container with ID starting with 7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.729214 4955 scope.go:117] "RemoveContainer" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.729505 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} err="failed to get container status \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": rpc error: code = NotFound desc = could not find 
container \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": container with ID starting with a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.729544 4955 scope.go:117] "RemoveContainer" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.729899 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} err="failed to get container status \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": rpc error: code = NotFound desc = could not find container \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": container with ID starting with f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.729927 4955 scope.go:117] "RemoveContainer" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.730199 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} err="failed to get container status \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": rpc error: code = NotFound desc = could not find container \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": container with ID starting with 2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.730232 4955 scope.go:117] "RemoveContainer" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.730509 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} err="failed to get container status \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": rpc error: code = NotFound desc = could not find container \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": container with ID starting with 06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.730548 4955 scope.go:117] "RemoveContainer" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.730819 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} err="failed to get container status \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": rpc error: code = NotFound desc = could not find container \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": container with ID starting with f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.730845 4955 scope.go:117] "RemoveContainer" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.731124 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} err="failed to get container status \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": rpc error: code = NotFound desc = could not find container \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": container with ID starting with 830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.731156 4955 scope.go:117] "RemoveContainer" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.731574 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} err="failed to get container status \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": rpc error: code = NotFound desc = could not find container \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": container with ID starting with 2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.731597 4955 scope.go:117] "RemoveContainer" containerID="5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.731839 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e"} err="failed to get container status \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": rpc error: code = NotFound desc = could not find container \"5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e\": container with ID starting with 5dc7d275e1c79684de57d0c067340e52d4b462690eeba4f5a6edb2589f3fed9e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.731867 4955 scope.go:117] "RemoveContainer" containerID="9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732085 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4"} err="failed to get container status \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": rpc error: code = NotFound desc = could not find container \"9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4\": container with ID starting with 9998af9c5680d935300f9215d7576ead9292fe138af70476bcedcc5e4eadafa4 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732113 4955 scope.go:117] "RemoveContainer" containerID="7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732287 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6"} err="failed to get container status \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": rpc error: code = NotFound desc = could not find container \"7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6\": container with ID starting with 
7461e5df1d16b582ff53d61a98d43d11d4d0e8f56a1020355179de09aaec3aa6 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732305 4955 scope.go:117] "RemoveContainer" containerID="a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732482 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411"} err="failed to get container status \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": rpc error: code = NotFound desc = could not find container \"a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411\": container with ID starting with a6b67988e35eb426355ffed216526fe5bf29d105bd4b36b1d140d77654702411 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732500 4955 scope.go:117] "RemoveContainer" containerID="f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732699 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36"} err="failed to get container status \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": rpc error: code = NotFound desc = could not find container \"f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36\": container with ID starting with f38be77c77f25a4bef1bb6e987f02b54f8f19bade4c62abeb28e3861f41efe36 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732718 4955 scope.go:117] "RemoveContainer" containerID="2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732898 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e"} err="failed to get container status \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": rpc error: code = NotFound desc = could not find container \"2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e\": container with ID starting with 2f1fa6091ff3a2fb5e1f4d06f1da5da98349c729054b20801b99215627bfb45e not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.732915 4955 scope.go:117] "RemoveContainer" containerID="06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733089 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b"} err="failed to get container status \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": rpc error: code = NotFound desc = could not find container \"06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b\": container with ID starting with 06158089dda674c870d3e6ad1e673e6bb2d78abf91ce1c1599bbe92566254e4b not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733106 4955 scope.go:117] "RemoveContainer" containerID="f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733314 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862"} err="failed to get container status \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": rpc error: code = NotFound desc = could not find container \"f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862\": container with ID starting with f2b108db75cddb34b8e98820677db28bc9707893c517633a0f36b4ad7dc79862 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733331 4955 scope.go:117] "RemoveContainer" containerID="830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733523 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14"} err="failed to get container status \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": rpc error: code = NotFound desc = could not find container \"830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14\": container with ID starting with 830b360ffb70d6989697f3c5f030d64137ef0fac9f080ce5a3b5ed0081ae1e14 not found: ID does not exist" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733567 4955 scope.go:117] "RemoveContainer" containerID="2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381" Feb 02 13:11:52 crc kubenswrapper[4955]: I0202 13:11:52.733765 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381"} err="failed to get container status \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": rpc error: code = NotFound desc = could not find container \"2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381\": container with ID starting with 2362419d17ea57643090c74d83023589f7f4a66f5cbb5516586483cc6e47b381 not found: ID does not exist" Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.520270 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"642a162bf9d8b8ad475786ba8c7003a1e3e6a07daf827fc0b098ca4aba0b4eeb"} Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.520543 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"b6ce70ac7a2afaecf56ee2e2c9dba1680d404ed7bb33df293a03f17e3dd736e9"} Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.520578 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"c221a8c02637bdab7314ddff25a8122d418f170c0dcffd44f739250586f8e598"} Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.520589 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"80177fe81a4ec8ecfa346c1766d2e586b7e92c0e31b98dd6f464809ed6b9dba2"} Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.520596 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" 
event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"4a3ae8a62d40f7ec37f41c8652292fdfe1d2ba6bc08485855c4ae1233a1c1ac6"} Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.520605 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"33b73bce79d7394d6be073edc2fb5591ada73dc40d5ce7411c82f0ec80434d73"} Feb 02 13:11:53 crc kubenswrapper[4955]: I0202 13:11:53.723267 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d35d22-ea6a-4ada-a086-b199c153c940" path="/var/lib/kubelet/pods/e0d35d22-ea6a-4ada-a086-b199c153c940/volumes" Feb 02 13:11:56 crc kubenswrapper[4955]: I0202 13:11:56.550933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"8b670ad2f5fe258a7449a6f595bacadf592c85abb45e75735e3a220a6092beb0"} Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.564448 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" event={"ID":"907addaf-fd40-44aa-9a8f-4457d984bdad","Type":"ContainerStarted","Data":"5a66cd5cbfae6fe4adb70045bd4abace76fc661d35bc1ae310005043a9be784d"} Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.564927 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.564948 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.564963 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.592260 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.595254 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" podStartSLOduration=7.595237638 podStartE2EDuration="7.595237638s" podCreationTimestamp="2026-02-02 13:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:11:58.59321065 +0000 UTC m=+569.505547110" watchObservedRunningTime="2026-02-02 13:11:58.595237638 +0000 UTC m=+569.507574088" Feb 02 13:11:58 crc kubenswrapper[4955]: I0202 13:11:58.604688 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.017230 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.018808 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.018999 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.019802 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15ec1b1ba75d775d8ebc23447ae7b707fd98515f86f18a1cbc9275eaecb69192"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.019986 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://15ec1b1ba75d775d8ebc23447ae7b707fd98515f86f18a1cbc9275eaecb69192" gracePeriod=600 Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.598248 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="15ec1b1ba75d775d8ebc23447ae7b707fd98515f86f18a1cbc9275eaecb69192" exitCode=0 Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.598322 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"15ec1b1ba75d775d8ebc23447ae7b707fd98515f86f18a1cbc9275eaecb69192"} Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.598689 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"c5c24a4cc614a40516e0f58b2e903db33d318f5c1611a533b9d33712af008fe5"} Feb 02 13:12:03 crc kubenswrapper[4955]: I0202 13:12:03.598710 4955 scope.go:117] "RemoveContainer" containerID="e2b066fe3d22716e67cada877eecd7854555a99c0cda44ff4824ac9dad20f74b" Feb 02 13:12:06 crc kubenswrapper[4955]: I0202 13:12:06.716434 4955 scope.go:117] "RemoveContainer" containerID="6c3e909a5c1d539466ff95169b2a61636dd2e41a3596e08414bd64392d29dd9f" Feb 02 13:12:06 crc kubenswrapper[4955]: E0202 13:12:06.717256 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7bpsz_openshift-multus(93e471b4-0f7f-4216-8f9c-911f21b64e1e)\"" pod="openshift-multus/multus-7bpsz" podUID="93e471b4-0f7f-4216-8f9c-911f21b64e1e" Feb 02 13:12:19 crc kubenswrapper[4955]: I0202 13:12:19.719886 4955 scope.go:117] "RemoveContainer" containerID="6c3e909a5c1d539466ff95169b2a61636dd2e41a3596e08414bd64392d29dd9f" Feb 02 13:12:20 crc kubenswrapper[4955]: I0202 13:12:20.712299 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/2.log" Feb 02 13:12:20 crc kubenswrapper[4955]: I0202 13:12:20.712921 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/1.log" Feb 02 13:12:20 crc kubenswrapper[4955]: I0202 13:12:20.712950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7bpsz" 
event={"ID":"93e471b4-0f7f-4216-8f9c-911f21b64e1e","Type":"ContainerStarted","Data":"890d1331264ac1fa57316ff7fd3a11bb95b1b5a53c84b7991647fac568d0045f"} Feb 02 13:12:22 crc kubenswrapper[4955]: I0202 13:12:22.187284 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g7z6n" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.131835 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v"] Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.134164 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.139921 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.143141 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v"] Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.167706 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrdm\" (UniqueName: \"kubernetes.io/projected/4dc45f9b-da78-485e-baf2-87572c369b4f-kube-api-access-nhrdm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.168072 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.168196 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.269458 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrdm\" (UniqueName: \"kubernetes.io/projected/4dc45f9b-da78-485e-baf2-87572c369b4f-kube-api-access-nhrdm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.269819 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.269956 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.270261 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.270464 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.287495 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrdm\" (UniqueName: \"kubernetes.io/projected/4dc45f9b-da78-485e-baf2-87572c369b4f-kube-api-access-nhrdm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.451146 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.693841 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v"] Feb 02 13:12:25 crc kubenswrapper[4955]: I0202 13:12:25.737582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" event={"ID":"4dc45f9b-da78-485e-baf2-87572c369b4f","Type":"ContainerStarted","Data":"a3b3d9736b6154dfc7a9da50b39aded73c644c6ac010aa8a760f13fece1ef5df"} Feb 02 13:12:26 crc kubenswrapper[4955]: I0202 13:12:26.743778 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" event={"ID":"4dc45f9b-da78-485e-baf2-87572c369b4f","Type":"ContainerStarted","Data":"b9cc5680499598814dd72471aa15d7bc754d7f1ead9ea5bd26af6dd61800e969"} Feb 02 13:12:27 crc kubenswrapper[4955]: I0202 13:12:27.749490 4955 generic.go:334] "Generic (PLEG): container finished" podID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerID="b9cc5680499598814dd72471aa15d7bc754d7f1ead9ea5bd26af6dd61800e969" exitCode=0 Feb 02 13:12:27 crc kubenswrapper[4955]: I0202 13:12:27.749535 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" event={"ID":"4dc45f9b-da78-485e-baf2-87572c369b4f","Type":"ContainerDied","Data":"b9cc5680499598814dd72471aa15d7bc754d7f1ead9ea5bd26af6dd61800e969"} Feb 02 13:12:29 crc kubenswrapper[4955]: I0202 13:12:29.939976 4955 scope.go:117] "RemoveContainer" containerID="a5cdc1e1f460fc68836a837b81dca1dec0597e760917853b09087a008ecdf8cb" Feb 02 13:12:30 crc kubenswrapper[4955]: I0202 13:12:30.766147 4955 generic.go:334] "Generic (PLEG): container finished" podID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerID="c1675377d433d9f15c4f6765ad48007019766c74452162a59ba5badc8c697698" exitCode=0 Feb 02 13:12:30 crc kubenswrapper[4955]: I0202 13:12:30.766234 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" event={"ID":"4dc45f9b-da78-485e-baf2-87572c369b4f","Type":"ContainerDied","Data":"c1675377d433d9f15c4f6765ad48007019766c74452162a59ba5badc8c697698"} Feb 02 13:12:30 crc kubenswrapper[4955]: I0202 13:12:30.768411 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7bpsz_93e471b4-0f7f-4216-8f9c-911f21b64e1e/kube-multus/2.log" Feb 02 13:12:31 crc kubenswrapper[4955]: I0202 13:12:31.776780 4955 generic.go:334] "Generic (PLEG): container finished" podID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerID="8303c71beac2b6347b4d60c4e1567b9c37e582507dcf44b0548e8f8cb073a432" exitCode=0 Feb 02 13:12:31 crc kubenswrapper[4955]: I0202 13:12:31.776902 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" event={"ID":"4dc45f9b-da78-485e-baf2-87572c369b4f","Type":"ContainerDied","Data":"8303c71beac2b6347b4d60c4e1567b9c37e582507dcf44b0548e8f8cb073a432"} Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.005825 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.061916 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-bundle\") pod \"4dc45f9b-da78-485e-baf2-87572c369b4f\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.062079 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhrdm\" (UniqueName: \"kubernetes.io/projected/4dc45f9b-da78-485e-baf2-87572c369b4f-kube-api-access-nhrdm\") pod \"4dc45f9b-da78-485e-baf2-87572c369b4f\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.062185 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-util\") pod \"4dc45f9b-da78-485e-baf2-87572c369b4f\" (UID: \"4dc45f9b-da78-485e-baf2-87572c369b4f\") " Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.063039 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-bundle" (OuterVolumeSpecName: "bundle") pod "4dc45f9b-da78-485e-baf2-87572c369b4f" (UID: "4dc45f9b-da78-485e-baf2-87572c369b4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.067531 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc45f9b-da78-485e-baf2-87572c369b4f-kube-api-access-nhrdm" (OuterVolumeSpecName: "kube-api-access-nhrdm") pod "4dc45f9b-da78-485e-baf2-87572c369b4f" (UID: "4dc45f9b-da78-485e-baf2-87572c369b4f"). InnerVolumeSpecName "kube-api-access-nhrdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.074223 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.074263 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhrdm\" (UniqueName: \"kubernetes.io/projected/4dc45f9b-da78-485e-baf2-87572c369b4f-kube-api-access-nhrdm\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.078720 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-util" (OuterVolumeSpecName: "util") pod "4dc45f9b-da78-485e-baf2-87572c369b4f" (UID: "4dc45f9b-da78-485e-baf2-87572c369b4f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.175364 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dc45f9b-da78-485e-baf2-87572c369b4f-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.789074 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" event={"ID":"4dc45f9b-da78-485e-baf2-87572c369b4f","Type":"ContainerDied","Data":"a3b3d9736b6154dfc7a9da50b39aded73c644c6ac010aa8a760f13fece1ef5df"} Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.789120 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b3d9736b6154dfc7a9da50b39aded73c644c6ac010aa8a760f13fece1ef5df" Feb 02 13:12:33 crc kubenswrapper[4955]: I0202 13:12:33.789138 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.456432 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dnzw4"] Feb 02 13:12:36 crc kubenswrapper[4955]: E0202 13:12:36.456997 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="util" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.457012 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="util" Feb 02 13:12:36 crc kubenswrapper[4955]: E0202 13:12:36.457024 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="extract" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.457031 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="extract" Feb 02 13:12:36 crc kubenswrapper[4955]: E0202 13:12:36.457045 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="pull" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.457055 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="pull" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.457211 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc45f9b-da78-485e-baf2-87572c369b4f" containerName="extract" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.457716 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.459168 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.459407 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.464387 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bj6b7" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.473443 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dnzw4"] Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.515043 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhgg\" (UniqueName: \"kubernetes.io/projected/e8f7b05a-acde-41cb-abf2-ea46b972fe09-kube-api-access-wkhgg\") pod \"nmstate-operator-646758c888-dnzw4\" (UID: \"e8f7b05a-acde-41cb-abf2-ea46b972fe09\") " pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.616405 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhgg\" (UniqueName: \"kubernetes.io/projected/e8f7b05a-acde-41cb-abf2-ea46b972fe09-kube-api-access-wkhgg\") pod \"nmstate-operator-646758c888-dnzw4\" (UID: \"e8f7b05a-acde-41cb-abf2-ea46b972fe09\") " pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.644815 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhgg\" (UniqueName: \"kubernetes.io/projected/e8f7b05a-acde-41cb-abf2-ea46b972fe09-kube-api-access-wkhgg\") pod \"nmstate-operator-646758c888-dnzw4\" (UID: \"e8f7b05a-acde-41cb-abf2-ea46b972fe09\") " pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" Feb 02 13:12:36 crc kubenswrapper[4955]: I0202 13:12:36.773827 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" Feb 02 13:12:37 crc kubenswrapper[4955]: I0202 13:12:37.189063 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dnzw4"] Feb 02 13:12:37 crc kubenswrapper[4955]: I0202 13:12:37.814282 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" event={"ID":"e8f7b05a-acde-41cb-abf2-ea46b972fe09","Type":"ContainerStarted","Data":"78a38466c8bf8abe35bd34106a4757984d036dfd2e4476d64acf59c0c53b368f"} Feb 02 13:12:39 crc kubenswrapper[4955]: I0202 13:12:39.827744 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" event={"ID":"e8f7b05a-acde-41cb-abf2-ea46b972fe09","Type":"ContainerStarted","Data":"bd887e5ea3f345551f072cda54d1ee731435f92318bd142c593008e798043a47"} Feb 02 13:12:39 crc kubenswrapper[4955]: I0202 13:12:39.854987 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dnzw4" podStartSLOduration=1.956651163 podStartE2EDuration="3.854942535s" podCreationTimestamp="2026-02-02 13:12:36 +0000 UTC" firstStartedPulling="2026-02-02 13:12:37.201980233 +0000 UTC m=+608.114316683" lastFinishedPulling="2026-02-02 13:12:39.100271605 +0000 UTC m=+610.012608055" observedRunningTime="2026-02-02 13:12:39.853197624 +0000 UTC m=+610.765534074" watchObservedRunningTime="2026-02-02 13:12:39.854942535 +0000 UTC m=+610.767278985" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.490093 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-gqqzg"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.491240 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.499851 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-gqqzg"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.506237 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-grsjj" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.509847 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.510503 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.518900 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.546511 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ztws6"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.547953 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.594306 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.635423 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.636138 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.640694 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.640711 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g2ngm" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.640870 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.646387 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658521 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-dbus-socket\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658616 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpjv\" (UniqueName: \"kubernetes.io/projected/861649ff-e626-4bd9-84ae-350085318d2e-kube-api-access-qwpjv\") pod \"nmstate-webhook-8474b5b9d8-5kxd7\" (UID: \"861649ff-e626-4bd9-84ae-350085318d2e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658737 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-nmstate-lock\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658768 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jwx\" (UniqueName: \"kubernetes.io/projected/af032e86-91c3-4253-b263-3aab67e04b81-kube-api-access-p7jwx\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658824 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-ovs-socket\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658856 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xhfpq\" (UniqueName: \"kubernetes.io/projected/2cad4ce5-2270-4724-b134-9d96c52a68ab-kube-api-access-xhfpq\") pod \"nmstate-metrics-54757c584b-gqqzg\" (UID: \"2cad4ce5-2270-4724-b134-9d96c52a68ab\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.658876 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/861649ff-e626-4bd9-84ae-350085318d2e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5kxd7\" (UID: \"861649ff-e626-4bd9-84ae-350085318d2e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.760689 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34e639ba-ce03-41b0-a48f-04c959db2204-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.760785 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e639ba-ce03-41b0-a48f-04c959db2204-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.760823 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-ovs-socket\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.760854 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfpq\" (UniqueName: \"kubernetes.io/projected/2cad4ce5-2270-4724-b134-9d96c52a68ab-kube-api-access-xhfpq\") pod \"nmstate-metrics-54757c584b-gqqzg\" (UID: \"2cad4ce5-2270-4724-b134-9d96c52a68ab\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.760879 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/861649ff-e626-4bd9-84ae-350085318d2e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5kxd7\" (UID: \"861649ff-e626-4bd9-84ae-350085318d2e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.760904 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-dbus-socket\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761140 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xpx\" (UniqueName: \"kubernetes.io/projected/34e639ba-ce03-41b0-a48f-04c959db2204-kube-api-access-r7xpx\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761165 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwpjv\" (UniqueName: \"kubernetes.io/projected/861649ff-e626-4bd9-84ae-350085318d2e-kube-api-access-qwpjv\") pod \"nmstate-webhook-8474b5b9d8-5kxd7\" (UID: \"861649ff-e626-4bd9-84ae-350085318d2e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761205 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-nmstate-lock\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761229 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jwx\" (UniqueName: \"kubernetes.io/projected/af032e86-91c3-4253-b263-3aab67e04b81-kube-api-access-p7jwx\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761242 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-ovs-socket\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761624 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-dbus-socket\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.761828 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/af032e86-91c3-4253-b263-3aab67e04b81-nmstate-lock\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.770661 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/861649ff-e626-4bd9-84ae-350085318d2e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5kxd7\" (UID: \"861649ff-e626-4bd9-84ae-350085318d2e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.784640 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwpjv\" (UniqueName: \"kubernetes.io/projected/861649ff-e626-4bd9-84ae-350085318d2e-kube-api-access-qwpjv\") pod \"nmstate-webhook-8474b5b9d8-5kxd7\" (UID: \"861649ff-e626-4bd9-84ae-350085318d2e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.786579 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfpq\" (UniqueName: \"kubernetes.io/projected/2cad4ce5-2270-4724-b134-9d96c52a68ab-kube-api-access-xhfpq\") pod \"nmstate-metrics-54757c584b-gqqzg\" (UID: \"2cad4ce5-2270-4724-b134-9d96c52a68ab\") " 
pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.790249 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jwx\" (UniqueName: \"kubernetes.io/projected/af032e86-91c3-4253-b263-3aab67e04b81-kube-api-access-p7jwx\") pod \"nmstate-handler-ztws6\" (UID: \"af032e86-91c3-4253-b263-3aab67e04b81\") " pod="openshift-nmstate/nmstate-handler-ztws6" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.814877 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.837870 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.845038 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-655b6b84f6-zdwff"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.848256 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-655b6b84f6-zdwff" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.862633 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7xpx\" (UniqueName: \"kubernetes.io/projected/34e639ba-ce03-41b0-a48f-04c959db2204-kube-api-access-r7xpx\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.862714 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34e639ba-ce03-41b0-a48f-04c959db2204-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.862738 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e639ba-ce03-41b0-a48f-04c959db2204-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.866180 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34e639ba-ce03-41b0-a48f-04c959db2204-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.867088 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e639ba-ce03-41b0-a48f-04c959db2204-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.870098 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-zdwff"] Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.874845 4955 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ztws6"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.884435 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7xpx\" (UniqueName: \"kubernetes.io/projected/34e639ba-ce03-41b0-a48f-04c959db2204-kube-api-access-r7xpx\") pod \"nmstate-console-plugin-7754f76f8b-hpkxf\" (UID: \"34e639ba-ce03-41b0-a48f-04c959db2204\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.952794 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.965522 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-oauth-serving-cert\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.965617 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-serving-cert\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.966445 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-service-ca\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.966481 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-trusted-ca-bundle\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.966503 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-oauth-config\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.966526 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-config\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:45 crc kubenswrapper[4955]: I0202 13:12:45.966542 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7hc\" (UniqueName: \"kubernetes.io/projected/bddc534e-f4b2-474d-8ce9-f83aa6006015-kube-api-access-8s7hc\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.012734 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-gqqzg"]
Feb 02 13:12:46 crc kubenswrapper[4955]: W0202 13:12:46.021383 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cad4ce5_2270_4724_b134_9d96c52a68ab.slice/crio-4a35c2680d885d9f6c07fb77f6f327e3a46b23eb3297b15951516246d9364268 WatchSource:0}: Error finding container 4a35c2680d885d9f6c07fb77f6f327e3a46b23eb3297b15951516246d9364268: Status 404 returned error can't find the container with id 4a35c2680d885d9f6c07fb77f6f327e3a46b23eb3297b15951516246d9364268
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-oauth-serving-cert\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068764 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-serving-cert\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068799 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-service-ca\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068825 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-trusted-ca-bundle\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-oauth-config\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7hc\" (UniqueName: \"kubernetes.io/projected/bddc534e-f4b2-474d-8ce9-f83aa6006015-kube-api-access-8s7hc\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.068880 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-config\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.069881 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-config\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.069955 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-service-ca\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.069990 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-oauth-serving-cert\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.070313 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bddc534e-f4b2-474d-8ce9-f83aa6006015-trusted-ca-bundle\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.073968 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-oauth-config\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.074195 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bddc534e-f4b2-474d-8ce9-f83aa6006015-console-serving-cert\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.089325 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7hc\" (UniqueName: \"kubernetes.io/projected/bddc534e-f4b2-474d-8ce9-f83aa6006015-kube-api-access-8s7hc\") pod \"console-655b6b84f6-zdwff\" (UID: \"bddc534e-f4b2-474d-8ce9-f83aa6006015\") " pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.143828 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf"]
Feb 02 13:12:46 crc kubenswrapper[4955]: W0202 13:12:46.147063 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e639ba_ce03_41b0_a48f_04c959db2204.slice/crio-979e0570340e7ed1d0109ee88990fec762782e4c15eb0d602ddb1b920ba5f87d WatchSource:0}: Error finding container 979e0570340e7ed1d0109ee88990fec762782e4c15eb0d602ddb1b920ba5f87d: Status 404 returned error can't find the container with id 979e0570340e7ed1d0109ee88990fec762782e4c15eb0d602ddb1b920ba5f87d
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.187057 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.326847 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7"]
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.624475 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-zdwff"]
Feb 02 13:12:46 crc kubenswrapper[4955]: W0202 13:12:46.627113 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbddc534e_f4b2_474d_8ce9_f83aa6006015.slice/crio-6e5f81655c4a64bf93f91d16f875a99e7b8a706e9f0c8a0460795c3374e121d6 WatchSource:0}: Error finding container 6e5f81655c4a64bf93f91d16f875a99e7b8a706e9f0c8a0460795c3374e121d6: Status 404 returned error can't find the container with id 6e5f81655c4a64bf93f91d16f875a99e7b8a706e9f0c8a0460795c3374e121d6
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.868951 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" event={"ID":"861649ff-e626-4bd9-84ae-350085318d2e","Type":"ContainerStarted","Data":"fd822af53a22dd8877625da62e78c319b2f86f6ee16970951835153910825b8a"}
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.870048 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" event={"ID":"34e639ba-ce03-41b0-a48f-04c959db2204","Type":"ContainerStarted","Data":"979e0570340e7ed1d0109ee88990fec762782e4c15eb0d602ddb1b920ba5f87d"}
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.871711 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-zdwff" event={"ID":"bddc534e-f4b2-474d-8ce9-f83aa6006015","Type":"ContainerStarted","Data":"104ce99885c98a24b1ab87d4d75900ebf849364179d2b0c769f21487bebc2ef7"}
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.871742 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-zdwff" event={"ID":"bddc534e-f4b2-474d-8ce9-f83aa6006015","Type":"ContainerStarted","Data":"6e5f81655c4a64bf93f91d16f875a99e7b8a706e9f0c8a0460795c3374e121d6"}
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.873002 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" event={"ID":"2cad4ce5-2270-4724-b134-9d96c52a68ab","Type":"ContainerStarted","Data":"4a35c2680d885d9f6c07fb77f6f327e3a46b23eb3297b15951516246d9364268"}
Feb 02 13:12:46 crc kubenswrapper[4955]: I0202 13:12:46.874039 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ztws6" event={"ID":"af032e86-91c3-4253-b263-3aab67e04b81","Type":"ContainerStarted","Data":"5610db98e5e8045565b9e9cbbcad32cd073e2d71faca44e5ce70919732e1fea9"}
Feb 02 13:12:47 crc kubenswrapper[4955]: I0202 13:12:47.910788 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-655b6b84f6-zdwff" podStartSLOduration=2.9107688019999998 podStartE2EDuration="2.910768802s" podCreationTimestamp="2026-02-02 13:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:12:47.905732424 +0000 UTC m=+618.818068884" watchObservedRunningTime="2026-02-02 13:12:47.910768802 +0000 UTC m=+618.823105262"
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.916316 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" event={"ID":"2cad4ce5-2270-4724-b134-9d96c52a68ab","Type":"ContainerStarted","Data":"a9c1ea545669340b616162e2a5398b5e7ed9b8eed4adb3b3bf169113253aff15"}
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.917450 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ztws6" event={"ID":"af032e86-91c3-4253-b263-3aab67e04b81","Type":"ContainerStarted","Data":"67e76c1ad648f75c11a953277efe9a0f26f0caf9b79fb36ee1a4e7931bc29ec6"}
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.917605 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ztws6"
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.920072 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" event={"ID":"861649ff-e626-4bd9-84ae-350085318d2e","Type":"ContainerStarted","Data":"4f4bb23ee1c52d107312f1fdc8d5fe828adb72dc3238a91e5c4451bd8c1634e9"}
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.920208 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7"
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.922134 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" event={"ID":"34e639ba-ce03-41b0-a48f-04c959db2204","Type":"ContainerStarted","Data":"ae1c9afa370cfd4df2f8e7a3bafe56dfeff51e48e11f5fc3033ca7f63feeceef"}
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.937010 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ztws6" podStartSLOduration=2.001492766 podStartE2EDuration="7.936994075s" podCreationTimestamp="2026-02-02 13:12:45 +0000 UTC" firstStartedPulling="2026-02-02 13:12:45.900671108 +0000 UTC m=+616.813007568" lastFinishedPulling="2026-02-02 13:12:51.836172427 +0000 UTC m=+622.748508877" observedRunningTime="2026-02-02 13:12:52.931981048 +0000 UTC m=+623.844317498" watchObservedRunningTime="2026-02-02 13:12:52.936994075 +0000 UTC m=+623.849330525"
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.948771 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hpkxf" podStartSLOduration=2.254501371 podStartE2EDuration="7.948744261s" podCreationTimestamp="2026-02-02 13:12:45 +0000 UTC" firstStartedPulling="2026-02-02 13:12:46.149194597 +0000 UTC m=+617.061531047" lastFinishedPulling="2026-02-02 13:12:51.843437477 +0000 UTC m=+622.755773937" observedRunningTime="2026-02-02 13:12:52.9465695 +0000 UTC m=+623.858905960" watchObservedRunningTime="2026-02-02 13:12:52.948744261 +0000 UTC m=+623.861080711"
Feb 02 13:12:52 crc kubenswrapper[4955]: I0202 13:12:52.966309 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7" podStartSLOduration=2.463098733 podStartE2EDuration="7.966292803s" podCreationTimestamp="2026-02-02 13:12:45 +0000 UTC" firstStartedPulling="2026-02-02 13:12:46.334253407 +0000 UTC m=+617.246589857" lastFinishedPulling="2026-02-02 13:12:51.837447477 +0000 UTC m=+622.749783927" observedRunningTime="2026-02-02 13:12:52.960952687 +0000 UTC m=+623.873289147" watchObservedRunningTime="2026-02-02 13:12:52.966292803 +0000 UTC m=+623.878629243"
Feb 02 13:12:56 crc kubenswrapper[4955]: I0202 13:12:56.188087 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:56 crc kubenswrapper[4955]: I0202 13:12:56.188432 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:56 crc kubenswrapper[4955]: I0202 13:12:56.192393 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:56 crc kubenswrapper[4955]: I0202 13:12:56.948413 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" event={"ID":"2cad4ce5-2270-4724-b134-9d96c52a68ab","Type":"ContainerStarted","Data":"fafdb80ec7567877f2ab7550f83bf9d2b8f6a6943df06e661c39d4941aa4244f"}
Feb 02 13:12:56 crc kubenswrapper[4955]: I0202 13:12:56.953022 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-655b6b84f6-zdwff"
Feb 02 13:12:56 crc kubenswrapper[4955]: I0202 13:12:56.969961 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-gqqzg" podStartSLOduration=2.073816843 podStartE2EDuration="11.969935823s" podCreationTimestamp="2026-02-02 13:12:45 +0000 UTC" firstStartedPulling="2026-02-02 13:12:46.027173895 +0000 UTC m=+616.939510345" lastFinishedPulling="2026-02-02 13:12:55.923292875 +0000 UTC m=+626.835629325" observedRunningTime="2026-02-02 13:12:56.965306384 +0000 UTC m=+627.877642844" watchObservedRunningTime="2026-02-02 13:12:56.969935823 +0000 UTC m=+627.882272283"
Feb 02 13:12:57 crc kubenswrapper[4955]: I0202 13:12:57.031054 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6tx72"]
Feb 02 13:13:00 crc kubenswrapper[4955]: I0202 13:13:00.903594 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ztws6"
Feb 02 13:13:05 crc kubenswrapper[4955]: I0202 13:13:05.845088 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5kxd7"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.494897 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"]
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.496756 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.501616 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.502206 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"]
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.693013 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.693062 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.693106 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-758q6\" (UniqueName: \"kubernetes.io/projected/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-kube-api-access-758q6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.794602 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.794812 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.794874 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-758q6\" (UniqueName: \"kubernetes.io/projected/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-kube-api-access-758q6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.795074 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.795382 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.817677 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-758q6\" (UniqueName: \"kubernetes.io/projected/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-kube-api-access-758q6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:17 crc kubenswrapper[4955]: I0202 13:13:17.821542 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:18 crc kubenswrapper[4955]: I0202 13:13:18.277243 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"]
Feb 02 13:13:19 crc kubenswrapper[4955]: I0202 13:13:19.067584 4955 generic.go:334] "Generic (PLEG): container finished" podID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerID="08fbd14a368df4fb4b5fcd1d984c1205afd5e43692835485e290050611512877" exitCode=0
Feb 02 13:13:19 crc kubenswrapper[4955]: I0202 13:13:19.067644 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl" event={"ID":"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c","Type":"ContainerDied","Data":"08fbd14a368df4fb4b5fcd1d984c1205afd5e43692835485e290050611512877"}
Feb 02 13:13:19 crc kubenswrapper[4955]: I0202 13:13:19.069750 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl" event={"ID":"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c","Type":"ContainerStarted","Data":"d6adcb678396fe17cd18e93daa99dd91fcc89569c5f67b93f19a7d1375e3e896"}
Feb 02 13:13:21 crc kubenswrapper[4955]: I0202 13:13:21.085931 4955 generic.go:334] "Generic (PLEG): container finished" podID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerID="f6c9502401d6f70b3385e1e8e2f57564acca7a9d641600bbac7f9a52ff2ff3a2" exitCode=0
Feb 02 13:13:21 crc kubenswrapper[4955]: I0202 13:13:21.086010 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl" event={"ID":"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c","Type":"ContainerDied","Data":"f6c9502401d6f70b3385e1e8e2f57564acca7a9d641600bbac7f9a52ff2ff3a2"}
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.079789 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6tx72" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerName="console" containerID="cri-o://3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35" gracePeriod=15
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.097313 4955 generic.go:334] "Generic (PLEG): container finished" podID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerID="ad1f4001cad2b4f1423e064ee4763a3680309e112d4e164f1badba64c9e1a25f" exitCode=0
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.097418 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl" event={"ID":"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c","Type":"ContainerDied","Data":"ad1f4001cad2b4f1423e064ee4763a3680309e112d4e164f1badba64c9e1a25f"}
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.425129 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6tx72_7a8cbe38-2ffd-4741-9bbe-752d1a94f72a/console/0.log"
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.425202 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6tx72"
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.555008 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-service-ca\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.555777 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-oauth-serving-cert\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.555827 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-oauth-config\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.555887 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-config\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.555938 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkvnb\" (UniqueName: \"kubernetes.io/projected/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-kube-api-access-wkvnb\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.556016 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-service-ca" (OuterVolumeSpecName: "service-ca") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.556092 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-trusted-ca-bundle\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.556240 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.556342 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-serving-cert\") pod \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\" (UID: \"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a\") "
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.556923 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.556943 4955 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.557304 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.557396 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-config" (OuterVolumeSpecName: "console-config") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.562783 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-kube-api-access-wkvnb" (OuterVolumeSpecName: "kube-api-access-wkvnb") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "kube-api-access-wkvnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.562947 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.563283 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" (UID: "7a8cbe38-2ffd-4741-9bbe-752d1a94f72a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.658324 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.658362 4955 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.658372 4955 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.658380 4955 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-console-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:22 crc kubenswrapper[4955]: I0202 13:13:22.658391 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkvnb\" (UniqueName: \"kubernetes.io/projected/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a-kube-api-access-wkvnb\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.116537 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6tx72_7a8cbe38-2ffd-4741-9bbe-752d1a94f72a/console/0.log"
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.116616 4955 generic.go:334] "Generic (PLEG): container finished" podID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerID="3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35" exitCode=2
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.116668 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6tx72" event={"ID":"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a","Type":"ContainerDied","Data":"3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35"}
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.116714 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6tx72" event={"ID":"7a8cbe38-2ffd-4741-9bbe-752d1a94f72a","Type":"ContainerDied","Data":"445810b40c8cfc35280abcb94f53041758d7ea3e1d302eccc9ac9066e2dc1ec1"}
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.116710 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6tx72"
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.116746 4955 scope.go:117] "RemoveContainer" containerID="3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35"
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.142301 4955 scope.go:117] "RemoveContainer" containerID="3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35"
Feb 02 13:13:23 crc kubenswrapper[4955]: E0202 13:13:23.143040 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35\": container with ID starting with 3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35 not found: ID does not exist" containerID="3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35"
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.143132 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35"} err="failed to get container status \"3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35\": rpc error: code = NotFound desc = could not find container \"3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35\": container with ID starting with 3fa83785871f30a7b32f4aa5027a51148625a42b7920dab22871252af3595f35 not found: ID does not exist"
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.154103 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6tx72"]
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.157922 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6tx72"]
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.326524 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.484769 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-bundle\") pod \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") "
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.484901 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-758q6\" (UniqueName: \"kubernetes.io/projected/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-kube-api-access-758q6\") pod \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") "
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.484932 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-util\") pod \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\" (UID: \"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c\") "
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.486073 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-bundle" (OuterVolumeSpecName: "bundle") pod "be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" (UID: "be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.492395 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-kube-api-access-758q6" (OuterVolumeSpecName: "kube-api-access-758q6") pod "be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" (UID: "be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c"). InnerVolumeSpecName "kube-api-access-758q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.499995 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-util" (OuterVolumeSpecName: "util") pod "be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" (UID: "be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.587303 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.587357 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-758q6\" (UniqueName: \"kubernetes.io/projected/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-kube-api-access-758q6\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.587376 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c-util\") on node \"crc\" DevicePath \"\""
Feb 02 13:13:23 crc kubenswrapper[4955]: I0202 13:13:23.725175 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" path="/var/lib/kubelet/pods/7a8cbe38-2ffd-4741-9bbe-752d1a94f72a/volumes"
Feb 02 13:13:24 crc kubenswrapper[4955]: I0202 13:13:24.124346 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl" event={"ID":"be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c","Type":"ContainerDied","Data":"d6adcb678396fe17cd18e93daa99dd91fcc89569c5f67b93f19a7d1375e3e896"}
Feb 02 13:13:24 crc kubenswrapper[4955]: I0202 13:13:24.124746 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6adcb678396fe17cd18e93daa99dd91fcc89569c5f67b93f19a7d1375e3e896"
Feb 02 13:13:24 crc kubenswrapper[4955]: I0202 13:13:24.124390 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.428758 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"]
Feb 02 13:13:32 crc kubenswrapper[4955]: E0202 13:13:32.429584 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="util"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.429602 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="util"
Feb 02 13:13:32 crc kubenswrapper[4955]: E0202 13:13:32.429617 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="extract"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.429624 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="extract"
Feb 02 13:13:32 crc kubenswrapper[4955]: E0202 13:13:32.429638 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerName="console"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.429646 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerName="console"
Feb 02 13:13:32 crc kubenswrapper[4955]: E0202 13:13:32.429662 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="pull"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.429670 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="pull"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.429792 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c" containerName="extract"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.429810 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8cbe38-2ffd-4741-9bbe-752d1a94f72a" containerName="console"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.430344 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.432849 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.432853 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bjh6w"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.433343 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.433342 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.435302 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.437571 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"]
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.508792 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb55affd-be36-4e4e-8757-2523cb995f32-apiservice-cert\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.508857 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb55affd-be36-4e4e-8757-2523cb995f32-webhook-cert\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.508996 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsgf\" (UniqueName: \"kubernetes.io/projected/cb55affd-be36-4e4e-8757-2523cb995f32-kube-api-access-qxsgf\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.610113 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsgf\" (UniqueName: \"kubernetes.io/projected/cb55affd-be36-4e4e-8757-2523cb995f32-kube-api-access-qxsgf\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.610203 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb55affd-be36-4e4e-8757-2523cb995f32-apiservice-cert\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.610247 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb55affd-be36-4e4e-8757-2523cb995f32-webhook-cert\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.631371 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb55affd-be36-4e4e-8757-2523cb995f32-webhook-cert\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.633263 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb55affd-be36-4e4e-8757-2523cb995f32-apiservice-cert\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.637206 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxsgf\" (UniqueName: \"kubernetes.io/projected/cb55affd-be36-4e4e-8757-2523cb995f32-kube-api-access-qxsgf\") pod \"metallb-operator-controller-manager-67485f44d-qvtcv\" (UID: \"cb55affd-be36-4e4e-8757-2523cb995f32\") " pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.749083 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.756367 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"]
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.757075 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.759136 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.767415 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.767614 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mlwhz"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.788784 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"]
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.915104 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/486ac9ae-6679-4f9a-87cf-5fbba490e986-apiservice-cert\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.915156 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/486ac9ae-6679-4f9a-87cf-5fbba490e986-webhook-cert\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:32 crc kubenswrapper[4955]: I0202 13:13:32.915198 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfcj\" (UniqueName: \"kubernetes.io/projected/486ac9ae-6679-4f9a-87cf-5fbba490e986-kube-api-access-wnfcj\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.016581 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/486ac9ae-6679-4f9a-87cf-5fbba490e986-apiservice-cert\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.016637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/486ac9ae-6679-4f9a-87cf-5fbba490e986-webhook-cert\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.016682 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnfcj\" (UniqueName: \"kubernetes.io/projected/486ac9ae-6679-4f9a-87cf-5fbba490e986-kube-api-access-wnfcj\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.022296 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/486ac9ae-6679-4f9a-87cf-5fbba490e986-webhook-cert\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.023678 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/486ac9ae-6679-4f9a-87cf-5fbba490e986-apiservice-cert\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.037788 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnfcj\" (UniqueName: \"kubernetes.io/projected/486ac9ae-6679-4f9a-87cf-5fbba490e986-kube-api-access-wnfcj\") pod \"metallb-operator-webhook-server-69d59c6b57-c75mt\" (UID: \"486ac9ae-6679-4f9a-87cf-5fbba490e986\") " pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.038828 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"]
Feb 02 13:13:33 crc kubenswrapper[4955]: W0202 13:13:33.043535 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb55affd_be36_4e4e_8757_2523cb995f32.slice/crio-cb5adfbebb0d8d74331737ea6944543328de89693c36c94b707ce4f8c65be079 WatchSource:0}: Error finding container cb5adfbebb0d8d74331737ea6944543328de89693c36c94b707ce4f8c65be079: Status 404 returned error can't find the container with id cb5adfbebb0d8d74331737ea6944543328de89693c36c94b707ce4f8c65be079
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.128736 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.182052 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv" event={"ID":"cb55affd-be36-4e4e-8757-2523cb995f32","Type":"ContainerStarted","Data":"cb5adfbebb0d8d74331737ea6944543328de89693c36c94b707ce4f8c65be079"}
Feb 02 13:13:33 crc kubenswrapper[4955]: I0202 13:13:33.321542 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"]
Feb 02 13:13:33 crc kubenswrapper[4955]: W0202 13:13:33.330129 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod486ac9ae_6679_4f9a_87cf_5fbba490e986.slice/crio-364bbc54298c4289056577b3a43191260898e81c4423bd5cc0cd6df2bc3a76d8 WatchSource:0}: Error finding container 364bbc54298c4289056577b3a43191260898e81c4423bd5cc0cd6df2bc3a76d8: Status 404 returned error can't find the container with id 364bbc54298c4289056577b3a43191260898e81c4423bd5cc0cd6df2bc3a76d8
Feb 02 13:13:34 crc kubenswrapper[4955]: I0202 13:13:34.189495 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt" event={"ID":"486ac9ae-6679-4f9a-87cf-5fbba490e986","Type":"ContainerStarted","Data":"364bbc54298c4289056577b3a43191260898e81c4423bd5cc0cd6df2bc3a76d8"}
Feb 02 13:13:36 crc kubenswrapper[4955]: I0202 13:13:36.199232 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv" event={"ID":"cb55affd-be36-4e4e-8757-2523cb995f32","Type":"ContainerStarted","Data":"2845a4803037149c6ead6b4ffde8111c8c55091080f739ca41a42cc2609b2f6f"}
Feb 02 13:13:36 crc kubenswrapper[4955]: I0202 13:13:36.199778 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:13:36 crc kubenswrapper[4955]: I0202 13:13:36.218589 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv" podStartSLOduration=1.826287634 podStartE2EDuration="4.218572693s" podCreationTimestamp="2026-02-02 13:13:32 +0000 UTC" firstStartedPulling="2026-02-02 13:13:33.049287659 +0000 UTC m=+663.961624109" lastFinishedPulling="2026-02-02 13:13:35.441572718 +0000 UTC m=+666.353909168" observedRunningTime="2026-02-02 13:13:36.217419936 +0000 UTC m=+667.129756396" watchObservedRunningTime="2026-02-02 13:13:36.218572693 +0000 UTC m=+667.130909143"
Feb 02 13:13:38 crc kubenswrapper[4955]: I0202 13:13:38.211795 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt" event={"ID":"486ac9ae-6679-4f9a-87cf-5fbba490e986","Type":"ContainerStarted","Data":"446df0c3f29ef641d7c8ea4c11cc10eb22970b01f98f6f70b5d3e9bee4f98dfb"}
Feb 02 13:13:38 crc kubenswrapper[4955]: I0202 13:13:38.212069 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:13:38 crc kubenswrapper[4955]: I0202 13:13:38.238851 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt" podStartSLOduration=2.129640002 podStartE2EDuration="6.238827874s" podCreationTimestamp="2026-02-02 13:13:32 +0000 UTC" firstStartedPulling="2026-02-02 13:13:33.332967664 +0000 UTC m=+664.245304114" lastFinishedPulling="2026-02-02 13:13:37.442155536 +0000 UTC m=+668.354491986" observedRunningTime="2026-02-02 13:13:38.234718476 +0000 UTC m=+669.147054926" watchObservedRunningTime="2026-02-02 13:13:38.238827874 +0000 UTC m=+669.151164324"
Feb 02 13:13:53 crc kubenswrapper[4955]: I0202 13:13:53.132316 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69d59c6b57-c75mt"
Feb 02 13:14:03 crc kubenswrapper[4955]: I0202 13:14:03.017232 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:14:03 crc kubenswrapper[4955]: I0202 13:14:03.017770 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:14:12 crc kubenswrapper[4955]: I0202 13:14:12.751406 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67485f44d-qvtcv"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.420866 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"]
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.421862 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.423644 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.423745 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lvg9l"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.432017 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mxh79"]
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.434283 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.435571 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.436124 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.437268 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"]
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.512715 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qhbgd"]
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.513608 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qhbgd"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.521364 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-zq68v"]
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.522508 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zq68v"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.523457 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.523607 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wd4cw"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.523739 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.523907 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.524079 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.538390 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zq68v"]
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.571535 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40d0d0f1-33ab-4255-8608-7dcfdfde094e-metrics-certs\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572501 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-sockets\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572600 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbsk\" (UniqueName: \"kubernetes.io/projected/0ce35428-6892-45fa-9455-6fc86652c803-kube-api-access-9cbsk\") pod \"frr-k8s-webhook-server-7df86c4f6c-fsws2\" (UID: \"0ce35428-6892-45fa-9455-6fc86652c803\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572772 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wlc\" (UniqueName: \"kubernetes.io/projected/40d0d0f1-33ab-4255-8608-7dcfdfde094e-kube-api-access-k8wlc\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572813 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-metrics\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-reloader\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572898 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-startup\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572930 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-conf\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.572947 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ce35428-6892-45fa-9455-6fc86652c803-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-fsws2\" (UID: \"0ce35428-6892-45fa-9455-6fc86652c803\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674209 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5dt\" (UniqueName: \"kubernetes.io/projected/a6d58a99-7496-48ea-9a41-dec05d17a5be-kube-api-access-np5dt\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674293 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wlc\" (UniqueName: \"kubernetes.io/projected/40d0d0f1-33ab-4255-8608-7dcfdfde094e-kube-api-access-k8wlc\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674318 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-metrics\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674358 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-reloader\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674383 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-cert\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674859 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-metrics\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.674953 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675034 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-startup\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675090 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a6d58a99-7496-48ea-9a41-dec05d17a5be-metallb-excludel2\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675120 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-metrics-certs\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675167 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-reloader\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675181 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75pmg\" (UniqueName: \"kubernetes.io/projected/c06f0638-2793-4677-bba3-ef54f0b70498-kube-api-access-75pmg\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675256 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-conf\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675293 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ce35428-6892-45fa-9455-6fc86652c803-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-fsws2\" (UID: \"0ce35428-6892-45fa-9455-6fc86652c803\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675335 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40d0d0f1-33ab-4255-8608-7dcfdfde094e-metrics-certs\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79"
Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675368 4955 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-sockets\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675428 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-metrics-certs\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675476 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cbsk\" (UniqueName: \"kubernetes.io/projected/0ce35428-6892-45fa-9455-6fc86652c803-kube-api-access-9cbsk\") pod \"frr-k8s-webhook-server-7df86c4f6c-fsws2\" (UID: \"0ce35428-6892-45fa-9455-6fc86652c803\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675525 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-conf\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.675854 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-sockets\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.676489 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/40d0d0f1-33ab-4255-8608-7dcfdfde094e-frr-startup\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.680323 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40d0d0f1-33ab-4255-8608-7dcfdfde094e-metrics-certs\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.689060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ce35428-6892-45fa-9455-6fc86652c803-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-fsws2\" (UID: \"0ce35428-6892-45fa-9455-6fc86652c803\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.691920 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wlc\" (UniqueName: \"kubernetes.io/projected/40d0d0f1-33ab-4255-8608-7dcfdfde094e-kube-api-access-k8wlc\") pod \"frr-k8s-mxh79\" (UID: \"40d0d0f1-33ab-4255-8608-7dcfdfde094e\") " pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.697379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cbsk\" (UniqueName: \"kubernetes.io/projected/0ce35428-6892-45fa-9455-6fc86652c803-kube-api-access-9cbsk\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-fsws2\" (UID: \"0ce35428-6892-45fa-9455-6fc86652c803\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.741191 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.748294 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777174 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-metrics-certs\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777264 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5dt\" (UniqueName: \"kubernetes.io/projected/a6d58a99-7496-48ea-9a41-dec05d17a5be-kube-api-access-np5dt\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-cert\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777332 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777382 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a6d58a99-7496-48ea-9a41-dec05d17a5be-metallb-excludel2\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777407 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-metrics-certs\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.777456 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75pmg\" (UniqueName: \"kubernetes.io/projected/c06f0638-2793-4677-bba3-ef54f0b70498-kube-api-access-75pmg\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:13 crc kubenswrapper[4955]: E0202 13:14:13.777979 4955 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 02 13:14:13 crc kubenswrapper[4955]: E0202 13:14:13.778021 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-metrics-certs 
podName:c06f0638-2793-4677-bba3-ef54f0b70498 nodeName:}" failed. No retries permitted until 2026-02-02 13:14:14.278007897 +0000 UTC m=+705.190344347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-metrics-certs") pod "controller-6968d8fdc4-zq68v" (UID: "c06f0638-2793-4677-bba3-ef54f0b70498") : secret "controller-certs-secret" not found Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.781898 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a6d58a99-7496-48ea-9a41-dec05d17a5be-metallb-excludel2\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: E0202 13:14:13.782098 4955 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 13:14:13 crc kubenswrapper[4955]: E0202 13:14:13.786707 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist podName:a6d58a99-7496-48ea-9a41-dec05d17a5be nodeName:}" failed. No retries permitted until 2026-02-02 13:14:14.286678476 +0000 UTC m=+705.199015016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist") pod "speaker-qhbgd" (UID: "a6d58a99-7496-48ea-9a41-dec05d17a5be") : secret "metallb-memberlist" not found Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.805132 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-cert\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.813507 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-metrics-certs\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.823330 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5dt\" (UniqueName: \"kubernetes.io/projected/a6d58a99-7496-48ea-9a41-dec05d17a5be-kube-api-access-np5dt\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:13 crc kubenswrapper[4955]: I0202 13:14:13.829359 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75pmg\" (UniqueName: \"kubernetes.io/projected/c06f0638-2793-4677-bba3-ef54f0b70498-kube-api-access-75pmg\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.298931 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-metrics-certs\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.300113 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:14 crc kubenswrapper[4955]: E0202 13:14:14.300374 4955 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 13:14:14 crc kubenswrapper[4955]: E0202 13:14:14.300520 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist podName:a6d58a99-7496-48ea-9a41-dec05d17a5be nodeName:}" failed. No retries permitted until 2026-02-02 13:14:15.300488259 +0000 UTC m=+706.212824709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist") pod "speaker-qhbgd" (UID: "a6d58a99-7496-48ea-9a41-dec05d17a5be") : secret "metallb-memberlist" not found Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.304971 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06f0638-2793-4677-bba3-ef54f0b70498-metrics-certs\") pod \"controller-6968d8fdc4-zq68v\" (UID: \"c06f0638-2793-4677-bba3-ef54f0b70498\") " pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.398069 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"2eff3952eea4c44da60ecc9c918dadf1b4a4c2ecda42437b2eac2dc2252ae246"} Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.440324 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.621089 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2"] Feb 02 13:14:14 crc kubenswrapper[4955]: W0202 13:14:14.628009 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ce35428_6892_45fa_9455_6fc86652c803.slice/crio-97c1d0f8afc864e6ce12d9629fda3cc85906e27cd5bfd8b103906b54096e71bd WatchSource:0}: Error finding container 97c1d0f8afc864e6ce12d9629fda3cc85906e27cd5bfd8b103906b54096e71bd: Status 404 returned error can't find the container with id 97c1d0f8afc864e6ce12d9629fda3cc85906e27cd5bfd8b103906b54096e71bd Feb 02 13:14:14 crc kubenswrapper[4955]: I0202 13:14:14.669083 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zq68v"] Feb 02 13:14:14 crc kubenswrapper[4955]: W0202 13:14:14.672690 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc06f0638_2793_4677_bba3_ef54f0b70498.slice/crio-b7913aaf2f0e3439cde3a18bcaeb0638cb2443f090413f32b04adfa0343e28f9 WatchSource:0}: Error finding container b7913aaf2f0e3439cde3a18bcaeb0638cb2443f090413f32b04adfa0343e28f9: Status 404 returned error can't find the container with id b7913aaf2f0e3439cde3a18bcaeb0638cb2443f090413f32b04adfa0343e28f9 Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.312541 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.327461 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a6d58a99-7496-48ea-9a41-dec05d17a5be-memberlist\") pod \"speaker-qhbgd\" (UID: \"a6d58a99-7496-48ea-9a41-dec05d17a5be\") " pod="metallb-system/speaker-qhbgd" Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.334311 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qhbgd" Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.406262 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qhbgd" event={"ID":"a6d58a99-7496-48ea-9a41-dec05d17a5be","Type":"ContainerStarted","Data":"58cf0b7d27df3e5797cd37f2dfa35d7a3776e396b763bdc36cc6947a81e75e1a"} Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.409001 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zq68v" event={"ID":"c06f0638-2793-4677-bba3-ef54f0b70498","Type":"ContainerStarted","Data":"625e49ac1104e4fcab17b904e5760bee39370493d90de4500e65ad3e06fe809b"} Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.409044 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zq68v" event={"ID":"c06f0638-2793-4677-bba3-ef54f0b70498","Type":"ContainerStarted","Data":"1c50877a057baf4bcbda4ab84a28c2b0da92417b4705cd3c7f3ee9cc2c86b322"} Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.409057 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zq68v" event={"ID":"c06f0638-2793-4677-bba3-ef54f0b70498","Type":"ContainerStarted","Data":"b7913aaf2f0e3439cde3a18bcaeb0638cb2443f090413f32b04adfa0343e28f9"} Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.409185 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.411769 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" event={"ID":"0ce35428-6892-45fa-9455-6fc86652c803","Type":"ContainerStarted","Data":"97c1d0f8afc864e6ce12d9629fda3cc85906e27cd5bfd8b103906b54096e71bd"} Feb 02 13:14:15 crc kubenswrapper[4955]: I0202 13:14:15.430968 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-zq68v" podStartSLOduration=2.430946096 podStartE2EDuration="2.430946096s" podCreationTimestamp="2026-02-02 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:14:15.427341389 +0000 UTC m=+706.339677839" watchObservedRunningTime="2026-02-02 13:14:15.430946096 +0000 UTC m=+706.343282556" Feb 02 13:14:16 crc kubenswrapper[4955]: I0202 13:14:16.429117 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qhbgd" event={"ID":"a6d58a99-7496-48ea-9a41-dec05d17a5be","Type":"ContainerStarted","Data":"746252ec7fbe7655626dcbf620f24e7be6296ace241f5668fdcd4f202c1b3daa"} Feb 02 13:14:16 crc kubenswrapper[4955]: I0202 13:14:16.429478 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qhbgd" Feb 02 13:14:16 crc kubenswrapper[4955]: I0202 13:14:16.429492 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qhbgd" event={"ID":"a6d58a99-7496-48ea-9a41-dec05d17a5be","Type":"ContainerStarted","Data":"0535d9240c72f7a4f73963bb331b417376bb515d8bc5991e61720a5d5e1aab1c"} Feb 02 13:14:16 crc kubenswrapper[4955]: I0202 13:14:16.454226 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qhbgd" podStartSLOduration=3.454194966 podStartE2EDuration="3.454194966s" podCreationTimestamp="2026-02-02 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:14:16.453243534 +0000 UTC m=+707.365579984" watchObservedRunningTime="2026-02-02 13:14:16.454194966 +0000 UTC m=+707.366531416" Feb 02 13:14:21 crc kubenswrapper[4955]: I0202 13:14:21.462732 4955 generic.go:334] "Generic (PLEG): container finished" podID="40d0d0f1-33ab-4255-8608-7dcfdfde094e" containerID="ffa992b680da95c48bd14e8fff7433821c2bca9c28208b9f3cc47ce80361124c" exitCode=0 Feb 02 13:14:21 crc kubenswrapper[4955]: I0202 13:14:21.462795 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerDied","Data":"ffa992b680da95c48bd14e8fff7433821c2bca9c28208b9f3cc47ce80361124c"} Feb 02 13:14:21 crc kubenswrapper[4955]: I0202 13:14:21.467195 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" event={"ID":"0ce35428-6892-45fa-9455-6fc86652c803","Type":"ContainerStarted","Data":"0373a80ed4a06c5be93bfd07ace597bb503a1d5c84738b01fa23fb7d3a143a49"} Feb 02 13:14:21 crc kubenswrapper[4955]: I0202 13:14:21.467395 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" Feb 02 13:14:21 crc kubenswrapper[4955]: I0202 13:14:21.509896 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" podStartSLOduration=1.980947595 podStartE2EDuration="8.509870986s" podCreationTimestamp="2026-02-02 13:14:13 +0000 UTC" firstStartedPulling="2026-02-02 13:14:14.631243954 +0000 UTC m=+705.543580404" lastFinishedPulling="2026-02-02 13:14:21.160167335 +0000 UTC m=+712.072503795" observedRunningTime="2026-02-02 13:14:21.506409974 +0000 UTC m=+712.418746434" watchObservedRunningTime="2026-02-02 13:14:21.509870986 +0000 UTC m=+712.422207436" Feb 02 13:14:22 crc kubenswrapper[4955]: I0202 13:14:22.486987 4955 generic.go:334] "Generic (PLEG): container finished" podID="40d0d0f1-33ab-4255-8608-7dcfdfde094e" containerID="649ae4c663c3b6b744bf2a9ad463403afa57a77323e877baa17b4d0e6e9eee42" exitCode=0 Feb 02 13:14:22 crc kubenswrapper[4955]: I0202 13:14:22.487120 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerDied","Data":"649ae4c663c3b6b744bf2a9ad463403afa57a77323e877baa17b4d0e6e9eee42"} Feb 02 13:14:23 crc kubenswrapper[4955]: I0202 13:14:23.496405 4955 generic.go:334] "Generic (PLEG): container finished" podID="40d0d0f1-33ab-4255-8608-7dcfdfde094e" containerID="ca16f38f899f02573a7da58513c9be9e4ef34e6ba7d613fb1ff4d07a20ca11c8" exitCode=0 Feb 02 13:14:23 crc kubenswrapper[4955]: I0202 13:14:23.496508 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerDied","Data":"ca16f38f899f02573a7da58513c9be9e4ef34e6ba7d613fb1ff4d07a20ca11c8"} Feb 02 13:14:24 crc kubenswrapper[4955]: I0202 13:14:24.447127 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-zq68v" Feb 02 13:14:24 crc kubenswrapper[4955]: I0202 13:14:24.507023 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"db6c6881b5daaa52a392fd3c671b1ff842a7b9456a881c76c547057b70e76042"} Feb 02 
13:14:24 crc kubenswrapper[4955]: I0202 13:14:24.507394 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"69e4ce3ab7ddc0e2c4f090cfcedc8be604f99a8d459e2cceb28ea2f027ffdb2f"} Feb 02 13:14:24 crc kubenswrapper[4955]: I0202 13:14:24.507414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"ba6f7b8f0cf0ad447788d01ff7ce80a7533cb14b87c8d388e7e59d6cf7e97e6b"} Feb 02 13:14:24 crc kubenswrapper[4955]: I0202 13:14:24.507428 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"8244ca0030911918155d61b66b0a1e3c396e29329692b9ad1ac0b6cbcb29a597"} Feb 02 13:14:24 crc kubenswrapper[4955]: I0202 13:14:24.507440 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"d6c6a1f2cf0c78a56b221d6d6b3c3d3a0bde2e0ac042e5b00d8a718d35fbcde4"} Feb 02 13:14:25 crc kubenswrapper[4955]: I0202 13:14:25.338494 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qhbgd" Feb 02 13:14:25 crc kubenswrapper[4955]: I0202 13:14:25.517861 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxh79" event={"ID":"40d0d0f1-33ab-4255-8608-7dcfdfde094e","Type":"ContainerStarted","Data":"84eba002863ae74a2af26f585af806056d4f946183cef8c19453088f9e0baede"} Feb 02 13:14:25 crc kubenswrapper[4955]: I0202 13:14:25.518040 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:25 crc kubenswrapper[4955]: I0202 13:14:25.543264 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mxh79" podStartSLOduration=5.360572033 podStartE2EDuration="12.54324622s" podCreationTimestamp="2026-02-02 13:14:13 +0000 UTC" firstStartedPulling="2026-02-02 13:14:13.946091795 +0000 UTC m=+704.858428245" lastFinishedPulling="2026-02-02 13:14:21.128765982 +0000 UTC m=+712.041102432" observedRunningTime="2026-02-02 13:14:25.538994168 +0000 UTC m=+716.451330618" watchObservedRunningTime="2026-02-02 13:14:25.54324622 +0000 UTC m=+716.455582670" Feb 02 13:14:27 crc kubenswrapper[4955]: I0202 13:14:27.932967 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d54dz"] Feb 02 13:14:27 crc kubenswrapper[4955]: I0202 13:14:27.934306 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:27 crc kubenswrapper[4955]: I0202 13:14:27.936716 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 13:14:27 crc kubenswrapper[4955]: I0202 13:14:27.938673 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-68p66" Feb 02 13:14:27 crc kubenswrapper[4955]: I0202 13:14:27.939393 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 13:14:27 crc kubenswrapper[4955]: I0202 13:14:27.939648 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d54dz"] Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.103723 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22stt\" (UniqueName: \"kubernetes.io/projected/2b39ee63-c696-454c-b538-688fa0a6e7fb-kube-api-access-22stt\") pod \"openstack-operator-index-d54dz\" (UID: \"2b39ee63-c696-454c-b538-688fa0a6e7fb\") " pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.204831 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22stt\" (UniqueName: \"kubernetes.io/projected/2b39ee63-c696-454c-b538-688fa0a6e7fb-kube-api-access-22stt\") pod \"openstack-operator-index-d54dz\" (UID: \"2b39ee63-c696-454c-b538-688fa0a6e7fb\") " pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.224337 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22stt\" (UniqueName: \"kubernetes.io/projected/2b39ee63-c696-454c-b538-688fa0a6e7fb-kube-api-access-22stt\") pod \"openstack-operator-index-d54dz\" (UID: \"2b39ee63-c696-454c-b538-688fa0a6e7fb\") " pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.261483 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.461372 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d54dz"] Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.545409 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d54dz" event={"ID":"2b39ee63-c696-454c-b538-688fa0a6e7fb","Type":"ContainerStarted","Data":"19d64df6f9978435962071c342eac7b2b9120cbd9aead31d81b9d2caaaf7398f"} Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.748964 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:28 crc kubenswrapper[4955]: I0202 13:14:28.790631 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:31 crc kubenswrapper[4955]: I0202 13:14:31.305428 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d54dz"] Feb 02 13:14:31 crc kubenswrapper[4955]: I0202 13:14:31.912092 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cqqqp"] Feb 02 13:14:31 crc kubenswrapper[4955]: I0202 13:14:31.912810 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:31 crc kubenswrapper[4955]: I0202 13:14:31.923866 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cqqqp"] Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.057006 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhn9v\" (UniqueName: \"kubernetes.io/projected/9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed-kube-api-access-zhn9v\") pod \"openstack-operator-index-cqqqp\" (UID: \"9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed\") " pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.158388 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhn9v\" (UniqueName: \"kubernetes.io/projected/9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed-kube-api-access-zhn9v\") pod \"openstack-operator-index-cqqqp\" (UID: \"9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed\") " pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.176599 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhn9v\" (UniqueName: \"kubernetes.io/projected/9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed-kube-api-access-zhn9v\") pod \"openstack-operator-index-cqqqp\" (UID: \"9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed\") " pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.234785 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.584297 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d54dz" event={"ID":"2b39ee63-c696-454c-b538-688fa0a6e7fb","Type":"ContainerStarted","Data":"901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec"} Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.584399 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-d54dz" podUID="2b39ee63-c696-454c-b538-688fa0a6e7fb" containerName="registry-server" containerID="cri-o://901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec" gracePeriod=2 Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.601261 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d54dz" podStartSLOduration=2.424165999 podStartE2EDuration="5.601239125s" podCreationTimestamp="2026-02-02 13:14:27 +0000 UTC" firstStartedPulling="2026-02-02 13:14:28.470442317 +0000 UTC m=+719.382778767" lastFinishedPulling="2026-02-02 13:14:31.647515443 +0000 UTC m=+722.559851893" observedRunningTime="2026-02-02 13:14:32.598905248 +0000 UTC m=+723.511241698" watchObservedRunningTime="2026-02-02 13:14:32.601239125 +0000 UTC m=+723.513575575" Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.642455 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cqqqp"] Feb 02 13:14:32 crc kubenswrapper[4955]: I0202 13:14:32.927084 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.017328 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.017401 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.067539 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22stt\" (UniqueName: \"kubernetes.io/projected/2b39ee63-c696-454c-b538-688fa0a6e7fb-kube-api-access-22stt\") pod \"2b39ee63-c696-454c-b538-688fa0a6e7fb\" (UID: \"2b39ee63-c696-454c-b538-688fa0a6e7fb\") " Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.073634 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b39ee63-c696-454c-b538-688fa0a6e7fb-kube-api-access-22stt" (OuterVolumeSpecName: "kube-api-access-22stt") pod "2b39ee63-c696-454c-b538-688fa0a6e7fb" (UID: "2b39ee63-c696-454c-b538-688fa0a6e7fb"). InnerVolumeSpecName "kube-api-access-22stt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.169487 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22stt\" (UniqueName: \"kubernetes.io/projected/2b39ee63-c696-454c-b538-688fa0a6e7fb-kube-api-access-22stt\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.594263 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cqqqp" event={"ID":"9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed","Type":"ContainerStarted","Data":"333507ceec6170ab11b316e4a40390d57230bc7b3a25d4b0e7aeacf88e17d033"} Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.595340 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cqqqp" event={"ID":"9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed","Type":"ContainerStarted","Data":"64f7e9c7fa2f59f39b323855c280b81f1a94057af3fd26385840c69e68cf6f36"} Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.596578 4955 generic.go:334] "Generic (PLEG): container finished" podID="2b39ee63-c696-454c-b538-688fa0a6e7fb" containerID="901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec" exitCode=0 Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.596666 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d54dz" event={"ID":"2b39ee63-c696-454c-b538-688fa0a6e7fb","Type":"ContainerDied","Data":"901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec"} Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.596693 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d54dz" event={"ID":"2b39ee63-c696-454c-b538-688fa0a6e7fb","Type":"ContainerDied","Data":"19d64df6f9978435962071c342eac7b2b9120cbd9aead31d81b9d2caaaf7398f"} Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.596713 4955 scope.go:117] "RemoveContainer" containerID="901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.596814 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d54dz" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.617250 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cqqqp" podStartSLOduration=2.575831869 podStartE2EDuration="2.617222356s" podCreationTimestamp="2026-02-02 13:14:31 +0000 UTC" firstStartedPulling="2026-02-02 13:14:32.684626913 +0000 UTC m=+723.596963363" lastFinishedPulling="2026-02-02 13:14:32.7260174 +0000 UTC m=+723.638353850" observedRunningTime="2026-02-02 13:14:33.614816899 +0000 UTC m=+724.527153369" watchObservedRunningTime="2026-02-02 13:14:33.617222356 +0000 UTC m=+724.529558806" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.625284 4955 scope.go:117] "RemoveContainer" containerID="901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec" Feb 02 13:14:33 crc kubenswrapper[4955]: E0202 13:14:33.625933 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec\": container with ID starting with 901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec not found: ID does not exist" containerID="901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.626019 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec"} err="failed to get container status \"901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec\": rpc error: code = NotFound desc = could not find container \"901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec\": container with ID starting with 901faf5c0298278039aa0038957d6a42f952310e133a3be45371b62deaff2cec not found: ID does not exist" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.640006 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d54dz"] Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.643841 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-d54dz"] Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.723699 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b39ee63-c696-454c-b538-688fa0a6e7fb" path="/var/lib/kubelet/pods/2b39ee63-c696-454c-b538-688fa0a6e7fb/volumes" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.750512 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-fsws2" Feb 02 13:14:33 crc kubenswrapper[4955]: I0202 13:14:33.755036 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mxh79" Feb 02 13:14:42 crc kubenswrapper[4955]: I0202 13:14:42.235685 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:42 crc kubenswrapper[4955]: I0202 13:14:42.236200 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:42 crc kubenswrapper[4955]: I0202 13:14:42.266632 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:42 crc kubenswrapper[4955]: I0202 
13:14:42.672623 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cqqqp" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.726567 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7"] Feb 02 13:14:47 crc kubenswrapper[4955]: E0202 13:14:47.727016 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b39ee63-c696-454c-b538-688fa0a6e7fb" containerName="registry-server" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.727027 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b39ee63-c696-454c-b538-688fa0a6e7fb" containerName="registry-server" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.727135 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b39ee63-c696-454c-b538-688fa0a6e7fb" containerName="registry-server" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.727967 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.729940 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xtsrk" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.731533 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7"] Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.754909 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-util\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.754986 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcz2d\" (UniqueName: \"kubernetes.io/projected/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-kube-api-access-pcz2d\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.755017 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-bundle\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.855780 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcz2d\" (UniqueName: \"kubernetes.io/projected/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-kube-api-access-pcz2d\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 
13:14:47.856042 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-bundle\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.856186 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-util\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.856935 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-util\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.856945 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-bundle\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:47 crc kubenswrapper[4955]: I0202 13:14:47.879094 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcz2d\" (UniqueName: \"kubernetes.io/projected/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-kube-api-access-pcz2d\") pod \"83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:48 crc kubenswrapper[4955]: I0202 13:14:48.066950 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:48 crc kubenswrapper[4955]: I0202 13:14:48.459924 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7"] Feb 02 13:14:48 crc kubenswrapper[4955]: I0202 13:14:48.698731 4955 generic.go:334] "Generic (PLEG): container finished" podID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerID="912a4c80f19f3efb11039a98b958c3516d6b6396a91119df326661a16b235a48" exitCode=0 Feb 02 13:14:48 crc kubenswrapper[4955]: I0202 13:14:48.698805 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" event={"ID":"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9","Type":"ContainerDied","Data":"912a4c80f19f3efb11039a98b958c3516d6b6396a91119df326661a16b235a48"} Feb 02 13:14:48 crc kubenswrapper[4955]: I0202 13:14:48.698833 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" event={"ID":"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9","Type":"ContainerStarted","Data":"aea991a7e526891753bcfd9167c14fc45b8da027f14832b284cd4ced46a455fb"} Feb 02 13:14:49 crc kubenswrapper[4955]: I0202 13:14:49.708151 4955 generic.go:334] "Generic (PLEG): container finished" podID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerID="6c4886d9174610054c85d4c329432a53a64bbf621fba85a7b8ad1932d5c6c3ec" exitCode=0 Feb 02 13:14:49 crc kubenswrapper[4955]: I0202 13:14:49.708268 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" event={"ID":"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9","Type":"ContainerDied","Data":"6c4886d9174610054c85d4c329432a53a64bbf621fba85a7b8ad1932d5c6c3ec"} Feb 02 13:14:50 crc kubenswrapper[4955]: I0202 13:14:50.715276 4955 generic.go:334] "Generic (PLEG): container finished" podID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerID="725870f0407df0f4017efc53ed060a78b5ee17a0b2c513a234f1a092d741b5ed" exitCode=0 Feb 02 13:14:50 crc kubenswrapper[4955]: I0202 13:14:50.715347 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" event={"ID":"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9","Type":"ContainerDied","Data":"725870f0407df0f4017efc53ed060a78b5ee17a0b2c513a234f1a092d741b5ed"} Feb 02 13:14:51 crc kubenswrapper[4955]: I0202 13:14:51.949226 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.021636 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcz2d\" (UniqueName: \"kubernetes.io/projected/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-kube-api-access-pcz2d\") pod \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.021680 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-util\") pod \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.021757 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-bundle\") pod \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\" (UID: \"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9\") " Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.022364 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-bundle" (OuterVolumeSpecName: "bundle") pod "89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" (UID: "89286ac8-71fd-4cef-b2d0-ef2ca01a87e9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.028745 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-kube-api-access-pcz2d" (OuterVolumeSpecName: "kube-api-access-pcz2d") pod "89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" (UID: "89286ac8-71fd-4cef-b2d0-ef2ca01a87e9"). InnerVolumeSpecName "kube-api-access-pcz2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.037657 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-util" (OuterVolumeSpecName: "util") pod "89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" (UID: "89286ac8-71fd-4cef-b2d0-ef2ca01a87e9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.122979 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcz2d\" (UniqueName: \"kubernetes.io/projected/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-kube-api-access-pcz2d\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.123024 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.123035 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89286ac8-71fd-4cef-b2d0-ef2ca01a87e9-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.729687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" event={"ID":"89286ac8-71fd-4cef-b2d0-ef2ca01a87e9","Type":"ContainerDied","Data":"aea991a7e526891753bcfd9167c14fc45b8da027f14832b284cd4ced46a455fb"} Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.729736 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea991a7e526891753bcfd9167c14fc45b8da027f14832b284cd4ced46a455fb" Feb 02 13:14:52 crc kubenswrapper[4955]: I0202 13:14:52.729742 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.853439 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx"] Feb 02 13:14:59 crc kubenswrapper[4955]: E0202 13:14:59.854291 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="pull" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.854307 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="pull" Feb 02 13:14:59 crc kubenswrapper[4955]: E0202 13:14:59.854324 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="util" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.854331 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="util" Feb 02 13:14:59 crc kubenswrapper[4955]: E0202 13:14:59.854345 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="extract" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.854353 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="extract" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.854480 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="89286ac8-71fd-4cef-b2d0-ef2ca01a87e9" containerName="extract" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.855083 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.858123 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8jdqw" Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.881021 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx"] Feb 02 13:14:59 crc kubenswrapper[4955]: I0202 13:14:59.927951 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrh5\" (UniqueName: \"kubernetes.io/projected/70ae8d8e-2a48-4fb8-afed-8a3cf2234982-kube-api-access-6lrh5\") pod \"openstack-operator-controller-init-75bcbdb4c8-sfqbx\" (UID: \"70ae8d8e-2a48-4fb8-afed-8a3cf2234982\") " pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.029653 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lrh5\" (UniqueName: \"kubernetes.io/projected/70ae8d8e-2a48-4fb8-afed-8a3cf2234982-kube-api-access-6lrh5\") pod \"openstack-operator-controller-init-75bcbdb4c8-sfqbx\" (UID: \"70ae8d8e-2a48-4fb8-afed-8a3cf2234982\") " pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.056934 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lrh5\" (UniqueName: \"kubernetes.io/projected/70ae8d8e-2a48-4fb8-afed-8a3cf2234982-kube-api-access-6lrh5\") pod \"openstack-operator-controller-init-75bcbdb4c8-sfqbx\" (UID: \"70ae8d8e-2a48-4fb8-afed-8a3cf2234982\") " pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.175283 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.226075 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4"] Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.226837 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.229085 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.231538 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.237750 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4"] Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.335144 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-config-volume\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.335577 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-secret-volume\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.335623 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7r79\" (UniqueName: \"kubernetes.io/projected/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-kube-api-access-b7r79\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.437260 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7r79\" (UniqueName: \"kubernetes.io/projected/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-kube-api-access-b7r79\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.437374 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-config-volume\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.437601 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-secret-volume\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.438305 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-config-volume\") pod 
\"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.449257 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-secret-volume\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.454288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7r79\" (UniqueName: \"kubernetes.io/projected/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-kube-api-access-b7r79\") pod \"collect-profiles-29500635-h58p4\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.554305 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.776444 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx"] Feb 02 13:15:00 crc kubenswrapper[4955]: I0202 13:15:00.824026 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" event={"ID":"70ae8d8e-2a48-4fb8-afed-8a3cf2234982","Type":"ContainerStarted","Data":"286224ec9bcbc6b9eb7af0d4804a2cfa8ad76404a8e592c0bb9716628e6746be"} Feb 02 13:15:01 crc kubenswrapper[4955]: I0202 13:15:01.071599 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4"] Feb 02 13:15:01 crc kubenswrapper[4955]: W0202 13:15:01.082431 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e4033bc_b573_436c_97c6_7c3ea8fc8c02.slice/crio-114b6aad03996282551bf459ef1ab68de37ed3cedb8ac6df4b5837f3ce30863a WatchSource:0}: Error finding container 114b6aad03996282551bf459ef1ab68de37ed3cedb8ac6df4b5837f3ce30863a: Status 404 returned error can't find the container with id 114b6aad03996282551bf459ef1ab68de37ed3cedb8ac6df4b5837f3ce30863a Feb 02 13:15:01 crc kubenswrapper[4955]: I0202 13:15:01.834058 4955 generic.go:334] "Generic (PLEG): container finished" podID="7e4033bc-b573-436c-97c6-7c3ea8fc8c02" containerID="f95eca64b1cc15ad05af7ef970ad0a6db4b37b5fc97b1f7f24a8c2a6a5006f88" exitCode=0 Feb 02 13:15:01 crc kubenswrapper[4955]: I0202 13:15:01.834228 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" event={"ID":"7e4033bc-b573-436c-97c6-7c3ea8fc8c02","Type":"ContainerDied","Data":"f95eca64b1cc15ad05af7ef970ad0a6db4b37b5fc97b1f7f24a8c2a6a5006f88"} Feb 02 13:15:01 crc kubenswrapper[4955]: I0202 13:15:01.834393 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" event={"ID":"7e4033bc-b573-436c-97c6-7c3ea8fc8c02","Type":"ContainerStarted","Data":"114b6aad03996282551bf459ef1ab68de37ed3cedb8ac6df4b5837f3ce30863a"} Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.018174 4955 patch_prober.go:28] interesting 
pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.018237 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.018278 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.020241 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5c24a4cc614a40516e0f58b2e903db33d318f5c1611a533b9d33712af008fe5"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.020302 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://c5c24a4cc614a40516e0f58b2e903db33d318f5c1611a533b9d33712af008fe5" gracePeriod=600 Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.687404 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.782974 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7r79\" (UniqueName: \"kubernetes.io/projected/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-kube-api-access-b7r79\") pod \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.783093 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-config-volume\") pod \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.783145 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-secret-volume\") pod \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\" (UID: \"7e4033bc-b573-436c-97c6-7c3ea8fc8c02\") " Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.794827 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e4033bc-b573-436c-97c6-7c3ea8fc8c02" (UID: "7e4033bc-b573-436c-97c6-7c3ea8fc8c02"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.799729 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e4033bc-b573-436c-97c6-7c3ea8fc8c02" (UID: "7e4033bc-b573-436c-97c6-7c3ea8fc8c02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.811731 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-kube-api-access-b7r79" (OuterVolumeSpecName: "kube-api-access-b7r79") pod "7e4033bc-b573-436c-97c6-7c3ea8fc8c02" (UID: "7e4033bc-b573-436c-97c6-7c3ea8fc8c02"). InnerVolumeSpecName "kube-api-access-b7r79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.848920 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="c5c24a4cc614a40516e0f58b2e903db33d318f5c1611a533b9d33712af008fe5" exitCode=0 Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.848983 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"c5c24a4cc614a40516e0f58b2e903db33d318f5c1611a533b9d33712af008fe5"} Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.849013 4955 scope.go:117] "RemoveContainer" containerID="15ec1b1ba75d775d8ebc23447ae7b707fd98515f86f18a1cbc9275eaecb69192" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.850610 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" event={"ID":"7e4033bc-b573-436c-97c6-7c3ea8fc8c02","Type":"ContainerDied","Data":"114b6aad03996282551bf459ef1ab68de37ed3cedb8ac6df4b5837f3ce30863a"} Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.850627 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114b6aad03996282551bf459ef1ab68de37ed3cedb8ac6df4b5837f3ce30863a" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.850671 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.884881 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7r79\" (UniqueName: \"kubernetes.io/projected/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-kube-api-access-b7r79\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.884913 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:03 crc kubenswrapper[4955]: I0202 13:15:03.884923 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4033bc-b573-436c-97c6-7c3ea8fc8c02-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:06 crc kubenswrapper[4955]: I0202 13:15:06.874956 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" event={"ID":"70ae8d8e-2a48-4fb8-afed-8a3cf2234982","Type":"ContainerStarted","Data":"18ba5a571e43912268559f98e53d26d261a30df556d06631a07ae5bdaeb6f9cd"} Feb 02 13:15:06 crc kubenswrapper[4955]: I0202 13:15:06.875533 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:15:06 crc kubenswrapper[4955]: I0202 13:15:06.877375 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"2cd1b5f598a7c72d423d2d4f07c02704decf6f32b1b11d2eaf56ffcde03b7e1b"} Feb 02 13:15:06 crc kubenswrapper[4955]: I0202 13:15:06.919410 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" podStartSLOduration=2.704953426 podStartE2EDuration="7.919390305s" podCreationTimestamp="2026-02-02 13:14:59 +0000 UTC" firstStartedPulling="2026-02-02 13:15:00.786101553 +0000 UTC m=+751.698438003" lastFinishedPulling="2026-02-02 13:15:06.000538422 +0000 UTC m=+756.912874882" observedRunningTime="2026-02-02 13:15:06.904055785 +0000 UTC m=+757.816392275" watchObservedRunningTime="2026-02-02 13:15:06.919390305 +0000 UTC m=+757.831726755" Feb 02 13:15:14 crc kubenswrapper[4955]: I0202 13:15:14.410348 4955 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 13:15:20 crc kubenswrapper[4955]: I0202 13:15:20.177843 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-75bcbdb4c8-sfqbx" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.470530 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m"] Feb 02 13:15:39 crc kubenswrapper[4955]: E0202 13:15:39.472205 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4033bc-b573-436c-97c6-7c3ea8fc8c02" containerName="collect-profiles" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.472228 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4033bc-b573-436c-97c6-7c3ea8fc8c02" containerName="collect-profiles" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.472444 4955 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4033bc-b573-436c-97c6-7c3ea8fc8c02" containerName="collect-profiles" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.473414 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.475081 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4hm2s" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.477021 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.478193 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.480092 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qnbjq" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.482626 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.493110 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.512683 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.514719 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.517170 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-btnwl" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.538164 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.546990 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.547814 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.549463 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4gnsk" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.554498 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx44h\" (UniqueName: \"kubernetes.io/projected/7b0df3b7-68cf-4cb0-94b0-69b394da89c5-kube-api-access-gx44h\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sd58m\" (UID: \"7b0df3b7-68cf-4cb0-94b0-69b394da89c5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.584284 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.585310 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.588391 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-n9ll2" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.602064 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.608811 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.632747 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.633620 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.635912 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-gckpj" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.642429 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.643508 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.647045 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.647310 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-r7wms" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.652631 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.657370 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx44h\" (UniqueName: \"kubernetes.io/projected/7b0df3b7-68cf-4cb0-94b0-69b394da89c5-kube-api-access-gx44h\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sd58m\" (UID: \"7b0df3b7-68cf-4cb0-94b0-69b394da89c5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.657465 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6bt\" (UniqueName: \"kubernetes.io/projected/acac6a68-fe33-41eb-8f49-0fd47cc4f0d4-kube-api-access-xn6bt\") pod \"designate-operator-controller-manager-6d9697b7f4-m2vf4\" (UID: \"acac6a68-fe33-41eb-8f49-0fd47cc4f0d4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.657536 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5shtj\" (UniqueName: \"kubernetes.io/projected/0e209e55-35cd-418f-902b-c16a5992677e-kube-api-access-5shtj\") pod \"cinder-operator-controller-manager-8d874c8fc-ts8vb\" (UID: \"0e209e55-35cd-418f-902b-c16a5992677e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.657621 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlgr\" (UniqueName: \"kubernetes.io/projected/cc234403-1bdb-40c8-a931-62b193347ae7-kube-api-access-srlgr\") pod \"glance-operator-controller-manager-8886f4c47-9jq6d\" (UID: \"cc234403-1bdb-40c8-a931-62b193347ae7\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.675012 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.675869 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.678550 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-924jl" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.688015 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx44h\" (UniqueName: \"kubernetes.io/projected/7b0df3b7-68cf-4cb0-94b0-69b394da89c5-kube-api-access-gx44h\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sd58m\" (UID: \"7b0df3b7-68cf-4cb0-94b0-69b394da89c5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.707234 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.716022 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.716836 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.720844 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rsgxr" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.728313 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.733023 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.734001 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.738105 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-t524s" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.738357 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.745344 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.758694 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.759704 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760030 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6bt\" (UniqueName: \"kubernetes.io/projected/acac6a68-fe33-41eb-8f49-0fd47cc4f0d4-kube-api-access-xn6bt\") pod \"designate-operator-controller-manager-6d9697b7f4-m2vf4\" (UID: \"acac6a68-fe33-41eb-8f49-0fd47cc4f0d4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760088 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760133 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9rg\" (UniqueName: \"kubernetes.io/projected/3ad669b6-5937-4a7a-9d0b-b54da1542c6f-kube-api-access-xb9rg\") pod \"horizon-operator-controller-manager-5fb775575f-99z75\" (UID: \"3ad669b6-5937-4a7a-9d0b-b54da1542c6f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760166 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k424j\" (UniqueName: \"kubernetes.io/projected/b61aade3-b2b3-4a5f-9862-a2018e56ea03-kube-api-access-k424j\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760189 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9nfb\" (UniqueName: \"kubernetes.io/projected/f25860b5-436a-486b-9d7d-065f19ac7f68-kube-api-access-d9nfb\") pod \"heat-operator-controller-manager-69d6db494d-7zjp6\" (UID: \"f25860b5-436a-486b-9d7d-065f19ac7f68\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760216 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5shtj\" (UniqueName: \"kubernetes.io/projected/0e209e55-35cd-418f-902b-c16a5992677e-kube-api-access-5shtj\") pod \"cinder-operator-controller-manager-8d874c8fc-ts8vb\" (UID: \"0e209e55-35cd-418f-902b-c16a5992677e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760263 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srlgr\" (UniqueName: \"kubernetes.io/projected/cc234403-1bdb-40c8-a931-62b193347ae7-kube-api-access-srlgr\") pod \"glance-operator-controller-manager-8886f4c47-9jq6d\" (UID: \"cc234403-1bdb-40c8-a931-62b193347ae7\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.760288 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvzx\" (UniqueName: 
\"kubernetes.io/projected/43da80bd-2db6-4ee2-becb-fb97aa4e41bf-kube-api-access-7kvzx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-vbqw7\" (UID: \"43da80bd-2db6-4ee2-becb-fb97aa4e41bf\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.763586 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mnp89" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.774226 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.802453 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6bt\" (UniqueName: \"kubernetes.io/projected/acac6a68-fe33-41eb-8f49-0fd47cc4f0d4-kube-api-access-xn6bt\") pod \"designate-operator-controller-manager-6d9697b7f4-m2vf4\" (UID: \"acac6a68-fe33-41eb-8f49-0fd47cc4f0d4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.810722 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.815877 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.817713 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.819820 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rpp8s" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.820175 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5shtj\" (UniqueName: \"kubernetes.io/projected/0e209e55-35cd-418f-902b-c16a5992677e-kube-api-access-5shtj\") pod \"cinder-operator-controller-manager-8d874c8fc-ts8vb\" (UID: \"0e209e55-35cd-418f-902b-c16a5992677e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.835238 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.835483 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.836548 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srlgr\" (UniqueName: \"kubernetes.io/projected/cc234403-1bdb-40c8-a931-62b193347ae7-kube-api-access-srlgr\") pod \"glance-operator-controller-manager-8886f4c47-9jq6d\" (UID: \"cc234403-1bdb-40c8-a931-62b193347ae7\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.842993 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.843989 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.845683 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dm2zz" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.852822 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.861725 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585r5\" (UniqueName: \"kubernetes.io/projected/30794b6d-3a42-4d85-bdb3-adaf55b73301-kube-api-access-585r5\") pod \"manila-operator-controller-manager-7dd968899f-2xj4n\" (UID: \"30794b6d-3a42-4d85-bdb3-adaf55b73301\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.861773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s54v\" (UniqueName: \"kubernetes.io/projected/697fa7c8-fb2d-411e-ad98-d7240bde28ae-kube-api-access-9s54v\") pod \"keystone-operator-controller-manager-84f48565d4-6f8mt\" (UID: \"697fa7c8-fb2d-411e-ad98-d7240bde28ae\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.861809 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtqk\" (UniqueName: \"kubernetes.io/projected/7e8608e6-cd83-4feb-ba63-261fc1a78437-kube-api-access-dxtqk\") pod \"mariadb-operator-controller-manager-67bf948998-bw6fj\" (UID: \"7e8608e6-cd83-4feb-ba63-261fc1a78437\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.862083 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.862132 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9rg\" (UniqueName: \"kubernetes.io/projected/3ad669b6-5937-4a7a-9d0b-b54da1542c6f-kube-api-access-xb9rg\") pod 
\"horizon-operator-controller-manager-5fb775575f-99z75\" (UID: \"3ad669b6-5937-4a7a-9d0b-b54da1542c6f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.862157 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k424j\" (UniqueName: \"kubernetes.io/projected/b61aade3-b2b3-4a5f-9862-a2018e56ea03-kube-api-access-k424j\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.862176 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9nfb\" (UniqueName: \"kubernetes.io/projected/f25860b5-436a-486b-9d7d-065f19ac7f68-kube-api-access-d9nfb\") pod \"heat-operator-controller-manager-69d6db494d-7zjp6\" (UID: \"f25860b5-436a-486b-9d7d-065f19ac7f68\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.862244 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvzx\" (UniqueName: \"kubernetes.io/projected/43da80bd-2db6-4ee2-becb-fb97aa4e41bf-kube-api-access-7kvzx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-vbqw7\" (UID: \"43da80bd-2db6-4ee2-becb-fb97aa4e41bf\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" Feb 02 13:15:39 crc kubenswrapper[4955]: E0202 13:15:39.863545 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:15:39 crc kubenswrapper[4955]: E0202 13:15:39.863605 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert podName:b61aade3-b2b3-4a5f-9862-a2018e56ea03 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:40.363589928 +0000 UTC m=+791.275926378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert") pod "infra-operator-controller-manager-79955696d6-cphc8" (UID: "b61aade3-b2b3-4a5f-9862-a2018e56ea03") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.881873 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.889644 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.892783 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.894096 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.896285 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9nfb\" (UniqueName: \"kubernetes.io/projected/f25860b5-436a-486b-9d7d-065f19ac7f68-kube-api-access-d9nfb\") pod \"heat-operator-controller-manager-69d6db494d-7zjp6\" (UID: \"f25860b5-436a-486b-9d7d-065f19ac7f68\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.896857 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fgg42" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.897494 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9rg\" (UniqueName: \"kubernetes.io/projected/3ad669b6-5937-4a7a-9d0b-b54da1542c6f-kube-api-access-xb9rg\") pod \"horizon-operator-controller-manager-5fb775575f-99z75\" (UID: \"3ad669b6-5937-4a7a-9d0b-b54da1542c6f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.898663 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k424j\" (UniqueName: \"kubernetes.io/projected/b61aade3-b2b3-4a5f-9862-a2018e56ea03-kube-api-access-k424j\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.899221 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvzx\" (UniqueName: \"kubernetes.io/projected/43da80bd-2db6-4ee2-becb-fb97aa4e41bf-kube-api-access-7kvzx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-vbqw7\" (UID: \"43da80bd-2db6-4ee2-becb-fb97aa4e41bf\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.912715 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.913080 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.924525 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-29k76"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.925374 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.927263 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-wznv8" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.951409 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.955657 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-29k76"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.967905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-585r5\" (UniqueName: \"kubernetes.io/projected/30794b6d-3a42-4d85-bdb3-adaf55b73301-kube-api-access-585r5\") pod \"manila-operator-controller-manager-7dd968899f-2xj4n\" (UID: \"30794b6d-3a42-4d85-bdb3-adaf55b73301\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.967956 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s54v\" (UniqueName: \"kubernetes.io/projected/697fa7c8-fb2d-411e-ad98-d7240bde28ae-kube-api-access-9s54v\") pod \"keystone-operator-controller-manager-84f48565d4-6f8mt\" (UID: \"697fa7c8-fb2d-411e-ad98-d7240bde28ae\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.967991 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88dnt\" (UniqueName: \"kubernetes.io/projected/74a1c10d-25f8-4436-9762-ddcb86e6bb5e-kube-api-access-88dnt\") pod \"nova-operator-controller-manager-55bff696bd-jh2sx\" (UID: \"74a1c10d-25f8-4436-9762-ddcb86e6bb5e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.968020 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dl7\" (UniqueName: \"kubernetes.io/projected/fb69ed7c-575f-4abd-8bbd-a5a884e9333b-kube-api-access-t6dl7\") pod \"octavia-operator-controller-manager-6687f8d877-nmvcl\" (UID: \"fb69ed7c-575f-4abd-8bbd-a5a884e9333b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.968044 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxtqk\" (UniqueName: \"kubernetes.io/projected/7e8608e6-cd83-4feb-ba63-261fc1a78437-kube-api-access-dxtqk\") pod \"mariadb-operator-controller-manager-67bf948998-bw6fj\" (UID: \"7e8608e6-cd83-4feb-ba63-261fc1a78437\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.968124 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p544b\" (UniqueName: \"kubernetes.io/projected/212960c6-7b05-4094-ade0-e957cb3b76c8-kube-api-access-p544b\") pod \"neutron-operator-controller-manager-585dbc889-ndfws\" (UID: \"212960c6-7b05-4094-ade0-e957cb3b76c8\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.985897 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"] Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.991331 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxtqk\" (UniqueName: 
\"kubernetes.io/projected/7e8608e6-cd83-4feb-ba63-261fc1a78437-kube-api-access-dxtqk\") pod \"mariadb-operator-controller-manager-67bf948998-bw6fj\" (UID: \"7e8608e6-cd83-4feb-ba63-261fc1a78437\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.992913 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:39 crc kubenswrapper[4955]: I0202 13:15:39.997467 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.009298 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.011200 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-585r5\" (UniqueName: \"kubernetes.io/projected/30794b6d-3a42-4d85-bdb3-adaf55b73301-kube-api-access-585r5\") pod \"manila-operator-controller-manager-7dd968899f-2xj4n\" (UID: \"30794b6d-3a42-4d85-bdb3-adaf55b73301\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.011319 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-n9j2m" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.013986 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.015103 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s54v\" (UniqueName: \"kubernetes.io/projected/697fa7c8-fb2d-411e-ad98-d7240bde28ae-kube-api-access-9s54v\") pod \"keystone-operator-controller-manager-84f48565d4-6f8mt\" (UID: \"697fa7c8-fb2d-411e-ad98-d7240bde28ae\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.017347 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ccqrr" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.038905 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.041461 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.064839 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.066026 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.068990 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88dnt\" (UniqueName: \"kubernetes.io/projected/74a1c10d-25f8-4436-9762-ddcb86e6bb5e-kube-api-access-88dnt\") pod \"nova-operator-controller-manager-55bff696bd-jh2sx\" (UID: \"74a1c10d-25f8-4436-9762-ddcb86e6bb5e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.069032 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dl7\" (UniqueName: \"kubernetes.io/projected/fb69ed7c-575f-4abd-8bbd-a5a884e9333b-kube-api-access-t6dl7\") pod \"octavia-operator-controller-manager-6687f8d877-nmvcl\" (UID: \"fb69ed7c-575f-4abd-8bbd-a5a884e9333b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.069069 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5fp\" (UniqueName: \"kubernetes.io/projected/79b15e2e-40e0-4677-b580-667f81fd3550-kube-api-access-4k5fp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.069138 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.069170 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p544b\" (UniqueName: \"kubernetes.io/projected/212960c6-7b05-4094-ade0-e957cb3b76c8-kube-api-access-p544b\") pod \"neutron-operator-controller-manager-585dbc889-ndfws\" (UID: \"212960c6-7b05-4094-ade0-e957cb3b76c8\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.069209 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j597n\" (UniqueName: \"kubernetes.io/projected/14a7569a-24fd-4aea-828d-ada50de34686-kube-api-access-j597n\") pod \"ovn-operator-controller-manager-788c46999f-29k76\" (UID: \"14a7569a-24fd-4aea-828d-ada50de34686\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.086136 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.087032 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.087964 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.093621 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dl7\" (UniqueName: \"kubernetes.io/projected/fb69ed7c-575f-4abd-8bbd-a5a884e9333b-kube-api-access-t6dl7\") pod \"octavia-operator-controller-manager-6687f8d877-nmvcl\" (UID: \"fb69ed7c-575f-4abd-8bbd-a5a884e9333b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.095359 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-msc7v" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.099510 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88dnt\" (UniqueName: \"kubernetes.io/projected/74a1c10d-25f8-4436-9762-ddcb86e6bb5e-kube-api-access-88dnt\") pod \"nova-operator-controller-manager-55bff696bd-jh2sx\" (UID: \"74a1c10d-25f8-4436-9762-ddcb86e6bb5e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.101258 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.118765 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p544b\" (UniqueName: \"kubernetes.io/projected/212960c6-7b05-4094-ade0-e957cb3b76c8-kube-api-access-p544b\") pod \"neutron-operator-controller-manager-585dbc889-ndfws\" (UID: \"212960c6-7b05-4094-ade0-e957cb3b76c8\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.138644 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.141335 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.143515 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6fqtz" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.172050 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.172528 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5fp\" (UniqueName: \"kubernetes.io/projected/79b15e2e-40e0-4677-b580-667f81fd3550-kube-api-access-4k5fp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.172671 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.172957 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.173083 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wnr\" (UniqueName: \"kubernetes.io/projected/937de608-a64a-40ab-8a80-90800c18cf8f-kube-api-access-52wnr\") pod \"placement-operator-controller-manager-5b964cf4cd-q8lhq\" (UID: \"937de608-a64a-40ab-8a80-90800c18cf8f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.173177 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.173217 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert podName:79b15e2e-40e0-4677-b580-667f81fd3550 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:40.673201796 +0000 UTC m=+791.585538236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" (UID: "79b15e2e-40e0-4677-b580-667f81fd3550") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.173513 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j597n\" (UniqueName: \"kubernetes.io/projected/14a7569a-24fd-4aea-828d-ada50de34686-kube-api-access-j597n\") pod \"ovn-operator-controller-manager-788c46999f-29k76\" (UID: \"14a7569a-24fd-4aea-828d-ada50de34686\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.173616 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrh9\" (UniqueName: \"kubernetes.io/projected/ae3aa85d-803e-42ca-aff0-b2f5cb660355-kube-api-access-ptrh9\") pod \"swift-operator-controller-manager-68fc8c869-hxf5r\" (UID: \"ae3aa85d-803e-42ca-aff0-b2f5cb660355\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.201731 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.202060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j597n\" (UniqueName: \"kubernetes.io/projected/14a7569a-24fd-4aea-828d-ada50de34686-kube-api-access-j597n\") pod \"ovn-operator-controller-manager-788c46999f-29k76\" (UID: \"14a7569a-24fd-4aea-828d-ada50de34686\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.208524 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.208633 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.215186 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-86cz9" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.222094 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.242029 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-bnxrv"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.244204 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.245583 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.250986 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5fp\" (UniqueName: \"kubernetes.io/projected/79b15e2e-40e0-4677-b580-667f81fd3550-kube-api-access-4k5fp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.259526 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-bnxrv"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.259544 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jvdcf" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.275644 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.277092 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrh9\" (UniqueName: \"kubernetes.io/projected/ae3aa85d-803e-42ca-aff0-b2f5cb660355-kube-api-access-ptrh9\") pod \"swift-operator-controller-manager-68fc8c869-hxf5r\" (UID: \"ae3aa85d-803e-42ca-aff0-b2f5cb660355\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.277275 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29s9x\" (UniqueName: \"kubernetes.io/projected/0e5f1bee-07dd-4eaf-9a3b-328845abb141-kube-api-access-29s9x\") pod \"telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx\" (UID: \"0e5f1bee-07dd-4eaf-9a3b-328845abb141\") " pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.277353 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52wnr\" (UniqueName: \"kubernetes.io/projected/937de608-a64a-40ab-8a80-90800c18cf8f-kube-api-access-52wnr\") pod \"placement-operator-controller-manager-5b964cf4cd-q8lhq\" (UID: \"937de608-a64a-40ab-8a80-90800c18cf8f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.305818 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.317586 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.318667 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.322238 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-26jdw" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.322309 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.322386 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.341370 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.357222 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrh9\" (UniqueName: \"kubernetes.io/projected/ae3aa85d-803e-42ca-aff0-b2f5cb660355-kube-api-access-ptrh9\") pod \"swift-operator-controller-manager-68fc8c869-hxf5r\" (UID: \"ae3aa85d-803e-42ca-aff0-b2f5cb660355\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.357350 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52wnr\" (UniqueName: \"kubernetes.io/projected/937de608-a64a-40ab-8a80-90800c18cf8f-kube-api-access-52wnr\") pod \"placement-operator-controller-manager-5b964cf4cd-q8lhq\" (UID: \"937de608-a64a-40ab-8a80-90800c18cf8f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.365527 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.368674 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.369644 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.373403 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-w64d2" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.378653 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29s9x\" (UniqueName: \"kubernetes.io/projected/0e5f1bee-07dd-4eaf-9a3b-328845abb141-kube-api-access-29s9x\") pod \"telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx\" (UID: \"0e5f1bee-07dd-4eaf-9a3b-328845abb141\") " pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.378699 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6ng\" (UniqueName: \"kubernetes.io/projected/4e156b3e-40e8-4ade-af5b-10f05949e12b-kube-api-access-7j6ng\") pod \"test-operator-controller-manager-56f8bfcd9f-pwrfk\" (UID: \"4e156b3e-40e8-4ade-af5b-10f05949e12b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.379002 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l25kl\" (UniqueName: \"kubernetes.io/projected/d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac-kube-api-access-l25kl\") pod \"watcher-operator-controller-manager-564965969-bnxrv\" (UID: \"d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.379145 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.379291 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.379348 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert podName:b61aade3-b2b3-4a5f-9862-a2018e56ea03 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:41.379330461 +0000 UTC m=+792.291666911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert") pod "infra-operator-controller-manager-79955696d6-cphc8" (UID: "b61aade3-b2b3-4a5f-9862-a2018e56ea03") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.392252 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.403709 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29s9x\" (UniqueName: \"kubernetes.io/projected/0e5f1bee-07dd-4eaf-9a3b-328845abb141-kube-api-access-29s9x\") pod \"telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx\" (UID: \"0e5f1bee-07dd-4eaf-9a3b-328845abb141\") " pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.411422 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.434220 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.480724 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhqd\" (UniqueName: \"kubernetes.io/projected/f231eb80-56bc-48bd-b412-a4247d11317f-kube-api-access-djhqd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-npj7b\" (UID: \"f231eb80-56bc-48bd-b412-a4247d11317f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.480765 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l25kl\" (UniqueName: \"kubernetes.io/projected/d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac-kube-api-access-l25kl\") pod \"watcher-operator-controller-manager-564965969-bnxrv\" (UID: \"d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.480791 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.480848 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j6ng\" (UniqueName: \"kubernetes.io/projected/4e156b3e-40e8-4ade-af5b-10f05949e12b-kube-api-access-7j6ng\") pod \"test-operator-controller-manager-56f8bfcd9f-pwrfk\" (UID: \"4e156b3e-40e8-4ade-af5b-10f05949e12b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.480880 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: 
\"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.480933 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj8c\" (UniqueName: \"kubernetes.io/projected/6ad0132c-7b8d-4342-8668-23e66e695a6e-kube-api-access-ckj8c\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.500359 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.509900 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l25kl\" (UniqueName: \"kubernetes.io/projected/d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac-kube-api-access-l25kl\") pod \"watcher-operator-controller-manager-564965969-bnxrv\" (UID: \"d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.510698 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j6ng\" (UniqueName: \"kubernetes.io/projected/4e156b3e-40e8-4ade-af5b-10f05949e12b-kube-api-access-7j6ng\") pod \"test-operator-controller-manager-56f8bfcd9f-pwrfk\" (UID: \"4e156b3e-40e8-4ade-af5b-10f05949e12b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.581967 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj8c\" (UniqueName: \"kubernetes.io/projected/6ad0132c-7b8d-4342-8668-23e66e695a6e-kube-api-access-ckj8c\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.582307 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhqd\" (UniqueName: \"kubernetes.io/projected/f231eb80-56bc-48bd-b412-a4247d11317f-kube-api-access-djhqd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-npj7b\" (UID: \"f231eb80-56bc-48bd-b412-a4247d11317f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.582343 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.582415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 
13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.582545 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.586677 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:41.086633014 +0000 UTC m=+791.998969464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "metrics-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.587268 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.588161 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.588202 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:41.088189321 +0000 UTC m=+792.000525771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.603757 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.616274 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhqd\" (UniqueName: \"kubernetes.io/projected/f231eb80-56bc-48bd-b412-a4247d11317f-kube-api-access-djhqd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-npj7b\" (UID: \"f231eb80-56bc-48bd-b412-a4247d11317f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.619588 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj8c\" (UniqueName: \"kubernetes.io/projected/6ad0132c-7b8d-4342-8668-23e66e695a6e-kube-api-access-ckj8c\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.647198 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb"] Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.684614 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: 
\"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.684800 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: E0202 13:15:40.684873 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert podName:79b15e2e-40e0-4677-b580-667f81fd3550 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:41.684855049 +0000 UTC m=+792.597191499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" (UID: "79b15e2e-40e0-4677-b580-667f81fd3550") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.698728 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" Feb 02 13:15:40 crc kubenswrapper[4955]: I0202 13:15:40.720002 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.001045 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.009783 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697fa7c8_fb2d_411e_ad98_d7240bde28ae.slice/crio-29e659e32c24534fb94b22d017b1ab015f7d43719857d426f2bc9fa961858116 WatchSource:0}: Error finding container 29e659e32c24534fb94b22d017b1ab015f7d43719857d426f2bc9fa961858116: Status 404 returned error can't find the container with id 29e659e32c24534fb94b22d017b1ab015f7d43719857d426f2bc9fa961858116 Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.017487 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.034965 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc234403_1bdb_40c8_a931_62b193347ae7.slice/crio-f6dc6be4919e720512bcd71eeda66fa536c0ae6077730feba0f8a52bdbb334f3 WatchSource:0}: Error finding container f6dc6be4919e720512bcd71eeda66fa536c0ae6077730feba0f8a52bdbb334f3: Status 404 returned error can't find the container with id f6dc6be4919e720512bcd71eeda66fa536c0ae6077730feba0f8a52bdbb334f3 Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.037393 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6"] Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.045493 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.050358 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25860b5_436a_486b_9d7d_065f19ac7f68.slice/crio-8a064a1ceba097df59fffd978006269e021167cf3eafcd74f154db6eb40a1e44 WatchSource:0}: Error finding container 8a064a1ceba097df59fffd978006269e021167cf3eafcd74f154db6eb40a1e44: Status 404 returned error can't find the container with id 8a064a1ceba097df59fffd978006269e021167cf3eafcd74f154db6eb40a1e44 Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.052709 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ad669b6_5937_4a7a_9d0b_b54da1542c6f.slice/crio-360a5a058ac69dd86bd4386d79cd802ab06fa1c19f5049ae5d20dee4ae9eb41a WatchSource:0}: Error finding container 360a5a058ac69dd86bd4386d79cd802ab06fa1c19f5049ae5d20dee4ae9eb41a: Status 404 returned error can't find the container with id 360a5a058ac69dd86bd4386d79cd802ab06fa1c19f5049ae5d20dee4ae9eb41a Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.059643 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.065072 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30794b6d_3a42_4d85_bdb3_adaf55b73301.slice/crio-d38edf1bd2a2e16297df1a5f12f0b8bb4ba5ccfebc0c5155997ff634840ec58a WatchSource:0}: Error finding container d38edf1bd2a2e16297df1a5f12f0b8bb4ba5ccfebc0c5155997ff634840ec58a: Status 404 returned error can't find the container with id d38edf1bd2a2e16297df1a5f12f0b8bb4ba5ccfebc0c5155997ff634840ec58a Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.065920 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.065947 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43da80bd_2db6_4ee2_becb_fb97aa4e41bf.slice/crio-da6281a2c6af20d3a76c361a09b6f0c55085a4ae0a6a56180877de2f666c6793 WatchSource:0}: Error finding container da6281a2c6af20d3a76c361a09b6f0c55085a4ae0a6a56180877de2f666c6793: Status 404 returned error can't find the container with id da6281a2c6af20d3a76c361a09b6f0c55085a4ae0a6a56180877de2f666c6793 Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.092936 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.093030 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.093087 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 
13:15:41.093169 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:42.093152575 +0000 UTC m=+793.005489025 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "metrics-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.093173 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.093251 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:42.093227276 +0000 UTC m=+793.005563726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "webhook-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.154418 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" event={"ID":"697fa7c8-fb2d-411e-ad98-d7240bde28ae","Type":"ContainerStarted","Data":"29e659e32c24534fb94b22d017b1ab015f7d43719857d426f2bc9fa961858116"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.156632 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" event={"ID":"f25860b5-436a-486b-9d7d-065f19ac7f68","Type":"ContainerStarted","Data":"8a064a1ceba097df59fffd978006269e021167cf3eafcd74f154db6eb40a1e44"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.157819 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" event={"ID":"30794b6d-3a42-4d85-bdb3-adaf55b73301","Type":"ContainerStarted","Data":"d38edf1bd2a2e16297df1a5f12f0b8bb4ba5ccfebc0c5155997ff634840ec58a"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.158996 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" event={"ID":"acac6a68-fe33-41eb-8f49-0fd47cc4f0d4","Type":"ContainerStarted","Data":"a3b8625f66c0b9d23d08d9a5a668d67ceac832894ce0c578dc80906d489198b8"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.159975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" event={"ID":"cc234403-1bdb-40c8-a931-62b193347ae7","Type":"ContainerStarted","Data":"f6dc6be4919e720512bcd71eeda66fa536c0ae6077730feba0f8a52bdbb334f3"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.162844 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" 
event={"ID":"43da80bd-2db6-4ee2-becb-fb97aa4e41bf","Type":"ContainerStarted","Data":"da6281a2c6af20d3a76c361a09b6f0c55085a4ae0a6a56180877de2f666c6793"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.165959 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" event={"ID":"0e209e55-35cd-418f-902b-c16a5992677e","Type":"ContainerStarted","Data":"545bac4260b19dbad3b4b572738a92f62f764424417afcf41d66a0b684cac475"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.167517 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" event={"ID":"7b0df3b7-68cf-4cb0-94b0-69b394da89c5","Type":"ContainerStarted","Data":"142846aedd8511bcce86bb25ee33a431ea3eb09eff9d4e0beda1716b3d2ef361"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.173241 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" event={"ID":"3ad669b6-5937-4a7a-9d0b-b54da1542c6f","Type":"ContainerStarted","Data":"360a5a058ac69dd86bd4386d79cd802ab06fa1c19f5049ae5d20dee4ae9eb41a"} Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.176809 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.207515 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae3aa85d_803e_42ca_aff0_b2f5cb660355.slice/crio-6161d3e89861414273ae3cf41c3aebfa7662e1a9868334dd7a1383af5c31756d WatchSource:0}: Error finding container 6161d3e89861414273ae3cf41c3aebfa7662e1a9868334dd7a1383af5c31756d: Status 404 returned error can't find the container with id 6161d3e89861414273ae3cf41c3aebfa7662e1a9868334dd7a1383af5c31756d Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.208102 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj"] Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.221549 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl"] Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.229426 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.231598 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212960c6_7b05_4094_ade0_e957cb3b76c8.slice/crio-116329c0facdef16d11576150c7da4aea47bf315cdc883539ae3653bb546f24d WatchSource:0}: Error finding container 116329c0facdef16d11576150c7da4aea47bf315cdc883539ae3653bb546f24d: Status 404 returned error can't find the container with id 116329c0facdef16d11576150c7da4aea47bf315cdc883539ae3653bb546f24d Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.233407 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r"] Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.242081 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p544b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-ndfws_openstack-operators(212960c6-7b05-4094-ade0-e957cb3b76c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.242686 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws"] Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.244096 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" podUID="212960c6-7b05-4094-ade0-e957cb3b76c8" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.250733 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptrh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-hxf5r_openstack-operators(ae3aa85d-803e-42ca-aff0-b2f5cb660355): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.252009 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" podUID="ae3aa85d-803e-42ca-aff0-b2f5cb660355" Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.393216 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk"] Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.396687 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.396856 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.396933 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert 
podName:b61aade3-b2b3-4a5f-9862-a2018e56ea03 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:43.396910751 +0000 UTC m=+794.309247201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert") pod "infra-operator-controller-manager-79955696d6-cphc8" (UID: "b61aade3-b2b3-4a5f-9862-a2018e56ea03") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.400924 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.403408 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e156b3e_40e8_4ade_af5b_10f05949e12b.slice/crio-1cb956f7a7ba21b25fa40b7547feab2c9809a1e388dbf95ba81bbee4fe4b1a91 WatchSource:0}: Error finding container 1cb956f7a7ba21b25fa40b7547feab2c9809a1e388dbf95ba81bbee4fe4b1a91: Status 404 returned error can't find the container with id 1cb956f7a7ba21b25fa40b7547feab2c9809a1e388dbf95ba81bbee4fe4b1a91 Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.405136 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5f1bee_07dd_4eaf_9a3b_328845abb141.slice/crio-62464380c5d82e1bebca13a1a590572f558d33bd16e1010d67c1a5218874e18d WatchSource:0}: Error finding container 62464380c5d82e1bebca13a1a590572f558d33bd16e1010d67c1a5218874e18d: Status 404 returned error can't find the container with id 62464380c5d82e1bebca13a1a590572f558d33bd16e1010d67c1a5218874e18d Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.405287 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-29k76"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.406890 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a7569a_24fd_4aea_828d_ada50de34686.slice/crio-ee3b1a3c711e06615d7d9761c3a7c0652a8698b678bc7c66878ea888993b1aaa WatchSource:0}: Error finding container ee3b1a3c711e06615d7d9761c3a7c0652a8698b678bc7c66878ea888993b1aaa: Status 404 returned error can't find the container with id ee3b1a3c711e06615d7d9761c3a7c0652a8698b678bc7c66878ea888993b1aaa Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.407971 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.41:5001/openstack-k8s-operators/telemetry-operator:8ee44d3a4e1228f73b5ef3371267701b0df1ce5f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29s9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_openstack-operators(0e5f1bee-07dd-4eaf-9a3b-328845abb141): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.409261 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" podUID="0e5f1bee-07dd-4eaf-9a3b-328845abb141" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.411957 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j597n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-29k76_openstack-operators(14a7569a-24fd-4aea-828d-ada50de34686): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.413177 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" podUID="14a7569a-24fd-4aea-828d-ada50de34686" Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.490448 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b"] Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.494953 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-bnxrv"] Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.496926 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0b0d88d_985e_4a4c_9f57_5ef95f00b9ac.slice/crio-7780b39b7ba21c5b90066b4377b736f747cda00591021aabb7145d654c4ac6ca WatchSource:0}: Error finding container 7780b39b7ba21c5b90066b4377b736f747cda00591021aabb7145d654c4ac6ca: Status 404 returned error can't find the container with id 7780b39b7ba21c5b90066b4377b736f747cda00591021aabb7145d654c4ac6ca Feb 02 13:15:41 crc kubenswrapper[4955]: W0202 13:15:41.501045 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231eb80_56bc_48bd_b412_a4247d11317f.slice/crio-0c4989abb1255b6e8d4f7aed11fb2a897e3a35b73bbd7e925b7279086df89bb5 WatchSource:0}: Error finding container 0c4989abb1255b6e8d4f7aed11fb2a897e3a35b73bbd7e925b7279086df89bb5: Status 404 returned error can't find the container with id 0c4989abb1255b6e8d4f7aed11fb2a897e3a35b73bbd7e925b7279086df89bb5 Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.504954 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djhqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-npj7b_openstack-operators(f231eb80-56bc-48bd-b412-a4247d11317f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.506070 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" podUID="f231eb80-56bc-48bd-b412-a4247d11317f" Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.704485 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.704708 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.706304 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert podName:79b15e2e-40e0-4677-b580-667f81fd3550 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:43.706282243 +0000 UTC m=+794.618618693 (durationBeforeRetry 2s). 
Feb 02 13:15:41 crc kubenswrapper[4955]: I0202 13:15:41.704485 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.704708 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:15:41 crc kubenswrapper[4955]: E0202 13:15:41.706304 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert podName:79b15e2e-40e0-4677-b580-667f81fd3550 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:43.706282243 +0000 UTC m=+794.618618693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" (UID: "79b15e2e-40e0-4677-b580-667f81fd3550") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.109939 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.110054 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.110190 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.110246 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:44.110227192 +0000 UTC m=+795.022563652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "metrics-server-cert" not found
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.110610 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.110657 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:44.110647322 +0000 UTC m=+795.022983772 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "webhook-server-cert" not found
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.183827 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" event={"ID":"fb69ed7c-575f-4abd-8bbd-a5a884e9333b","Type":"ContainerStarted","Data":"0b1cd57c2aa36c8123854650f7bee5f9871e3732580aceb12aacd7cc423ca00b"}
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.186588 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" event={"ID":"937de608-a64a-40ab-8a80-90800c18cf8f","Type":"ContainerStarted","Data":"c4a16c26dd005de5adf3e9052c09f13d948c7432308c3b5981f1968d1d89d931"}
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.198523 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" event={"ID":"14a7569a-24fd-4aea-828d-ada50de34686","Type":"ContainerStarted","Data":"ee3b1a3c711e06615d7d9761c3a7c0652a8698b678bc7c66878ea888993b1aaa"}
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.200478 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" podUID="14a7569a-24fd-4aea-828d-ada50de34686"
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.203422 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" event={"ID":"7e8608e6-cd83-4feb-ba63-261fc1a78437","Type":"ContainerStarted","Data":"1ec690d0d488c89c9cc7f03ee827de5d57870ac292b157050d70cceb50e875ec"}
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.204687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" event={"ID":"4e156b3e-40e8-4ade-af5b-10f05949e12b","Type":"ContainerStarted","Data":"1cb956f7a7ba21b25fa40b7547feab2c9809a1e388dbf95ba81bbee4fe4b1a91"}
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.207219 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" event={"ID":"f231eb80-56bc-48bd-b412-a4247d11317f","Type":"ContainerStarted","Data":"0c4989abb1255b6e8d4f7aed11fb2a897e3a35b73bbd7e925b7279086df89bb5"}
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.208789 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" podUID="f231eb80-56bc-48bd-b412-a4247d11317f"
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.209792 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" event={"ID":"ae3aa85d-803e-42ca-aff0-b2f5cb660355","Type":"ContainerStarted","Data":"6161d3e89861414273ae3cf41c3aebfa7662e1a9868334dd7a1383af5c31756d"}
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.211075 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" podUID="ae3aa85d-803e-42ca-aff0-b2f5cb660355"
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.211520 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" event={"ID":"d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac","Type":"ContainerStarted","Data":"7780b39b7ba21c5b90066b4377b736f747cda00591021aabb7145d654c4ac6ca"}
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.215033 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" event={"ID":"212960c6-7b05-4094-ade0-e957cb3b76c8","Type":"ContainerStarted","Data":"116329c0facdef16d11576150c7da4aea47bf315cdc883539ae3653bb546f24d"}
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.226727 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" podUID="212960c6-7b05-4094-ade0-e957cb3b76c8"
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.231180 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" event={"ID":"0e5f1bee-07dd-4eaf-9a3b-328845abb141","Type":"ContainerStarted","Data":"62464380c5d82e1bebca13a1a590572f558d33bd16e1010d67c1a5218874e18d"}
Feb 02 13:15:42 crc kubenswrapper[4955]: E0202 13:15:42.232505 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.41:5001/openstack-k8s-operators/telemetry-operator:8ee44d3a4e1228f73b5ef3371267701b0df1ce5f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" podUID="0e5f1bee-07dd-4eaf-9a3b-328845abb141"
Feb 02 13:15:42 crc kubenswrapper[4955]: I0202 13:15:42.234861 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" event={"ID":"74a1c10d-25f8-4436-9762-ddcb86e6bb5e","Type":"ContainerStarted","Data":"2d24f32b7755b9773cd4bc58719a86683c8c5456afe4b5e908c5438827c48afc"}
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.253624 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" podUID="ae3aa85d-803e-42ca-aff0-b2f5cb660355"
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.254004 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" podUID="f231eb80-56bc-48bd-b412-a4247d11317f"
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.254089 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.41:5001/openstack-k8s-operators/telemetry-operator:8ee44d3a4e1228f73b5ef3371267701b0df1ce5f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" podUID="0e5f1bee-07dd-4eaf-9a3b-328845abb141"
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.254126 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" podUID="14a7569a-24fd-4aea-828d-ada50de34686"
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.254157 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" podUID="212960c6-7b05-4094-ade0-e957cb3b76c8"
Feb 02 13:15:43 crc kubenswrapper[4955]: I0202 13:15:43.428783 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.429130 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.429237 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert podName:b61aade3-b2b3-4a5f-9862-a2018e56ea03 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:47.429210072 +0000 UTC m=+798.341546602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert") pod "infra-operator-controller-manager-79955696d6-cphc8" (UID: "b61aade3-b2b3-4a5f-9862-a2018e56ea03") : secret "infra-operator-webhook-server-cert" not found
Feb 02 13:15:43 crc kubenswrapper[4955]: I0202 13:15:43.732226 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.732486 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:15:43 crc kubenswrapper[4955]: E0202 13:15:43.732673 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert podName:79b15e2e-40e0-4677-b580-667f81fd3550 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:47.732636431 +0000 UTC m=+798.644972881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" (UID: "79b15e2e-40e0-4677-b580-667f81fd3550") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:15:44 crc kubenswrapper[4955]: I0202 13:15:44.138452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:44 crc kubenswrapper[4955]: I0202 13:15:44.138599 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:44 crc kubenswrapper[4955]: E0202 13:15:44.138650 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 13:15:44 crc kubenswrapper[4955]: E0202 13:15:44.138712 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:48.138694752 +0000 UTC m=+799.051031202 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "webhook-server-cert" not found
Feb 02 13:15:44 crc kubenswrapper[4955]: E0202 13:15:44.138759 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 13:15:44 crc kubenswrapper[4955]: E0202 13:15:44.138797 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:48.138785984 +0000 UTC m=+799.051122434 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "metrics-server-cert" not found
Feb 02 13:15:47 crc kubenswrapper[4955]: I0202 13:15:47.489174 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"
Feb 02 13:15:47 crc kubenswrapper[4955]: E0202 13:15:47.489357 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 13:15:47 crc kubenswrapper[4955]: E0202 13:15:47.489754 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert podName:b61aade3-b2b3-4a5f-9862-a2018e56ea03 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:55.489732628 +0000 UTC m=+806.402069138 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert") pod "infra-operator-controller-manager-79955696d6-cphc8" (UID: "b61aade3-b2b3-4a5f-9862-a2018e56ea03") : secret "infra-operator-webhook-server-cert" not found
Feb 02 13:15:47 crc kubenswrapper[4955]: I0202 13:15:47.794004 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:15:47 crc kubenswrapper[4955]: E0202 13:15:47.794197 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:15:47 crc kubenswrapper[4955]: E0202 13:15:47.794250 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert podName:79b15e2e-40e0-4677-b580-667f81fd3550 nodeName:}" failed. No retries permitted until 2026-02-02 13:15:55.794233723 +0000 UTC m=+806.706570173 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" (UID: "79b15e2e-40e0-4677-b580-667f81fd3550") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:15:48 crc kubenswrapper[4955]: I0202 13:15:48.199296 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:48 crc kubenswrapper[4955]: I0202 13:15:48.199397 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:48 crc kubenswrapper[4955]: E0202 13:15:48.199513 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 13:15:48 crc kubenswrapper[4955]: E0202 13:15:48.199566 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 13:15:48 crc kubenswrapper[4955]: E0202 13:15:48.199619 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:56.199595536 +0000 UTC m=+807.111931986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "metrics-server-cert" not found
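
The durationBeforeRetry values in the MountVolume failures above double on every attempt for a given volume: 2s at 13:15:41-44, 4s at 13:15:43-44, 8s at 13:15:47-48 (and 16s at 13:15:56 further down). The mounts themselves fail only because the webhook and metrics certificate secrets have not been published yet, so kubelet's operation executor keeps retrying each volume under exponential backoff until the secrets appear. A sketch of the doubling schedule, with the 2s starting delay taken from the log and the cap an assumption for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // 2s starting delay observed in the log; the cap is an assumed bound.
        delay, maxDelay := 2*time.Second, 2*time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once whichever controller owns these secrets creates them, the next scheduled retry succeeds; that is visible below as the "MountVolume.SetUp succeeded" entries at 13:15:55 and 13:16:12.
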
Feb 02 13:15:48 crc kubenswrapper[4955]: E0202 13:15:48.199640 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:15:56.199631507 +0000 UTC m=+807.111968057 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "webhook-server-cert" not found
Feb 02 13:15:54 crc kubenswrapper[4955]: E0202 13:15:54.181864 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17"
Feb 02 13:15:54 crc kubenswrapper[4955]: E0202 13:15:54.182670 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9s54v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-6f8mt_openstack-operators(697fa7c8-fb2d-411e-ad98-d7240bde28ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 13:15:54 crc kubenswrapper[4955]: E0202 13:15:54.184604 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" podUID="697fa7c8-fb2d-411e-ad98-d7240bde28ae"
podUID="697fa7c8-fb2d-411e-ad98-d7240bde28ae" Feb 02 13:15:54 crc kubenswrapper[4955]: E0202 13:15:54.330995 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" podUID="697fa7c8-fb2d-411e-ad98-d7240bde28ae" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.328975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" event={"ID":"74a1c10d-25f8-4436-9762-ddcb86e6bb5e","Type":"ContainerStarted","Data":"28382c0704b96a8e7848eb486342433b58c7fa58b200ca26757bdaf554597847"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.329932 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.330404 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" event={"ID":"cc234403-1bdb-40c8-a931-62b193347ae7","Type":"ContainerStarted","Data":"0082c418f152d1c7c1692378e9e10ccff0c87a7f045077fb1cdce1cdccb6574a"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.330545 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.332623 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" event={"ID":"43da80bd-2db6-4ee2-becb-fb97aa4e41bf","Type":"ContainerStarted","Data":"0a117e220e78ecbf3c1c11d454149416c16b9b81b7a3950eba6464585aee45c1"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.332696 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.334136 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" event={"ID":"4e156b3e-40e8-4ade-af5b-10f05949e12b","Type":"ContainerStarted","Data":"fd9ae7750a67c1c8e59ce78da27ea4f8e01a1c4ae924a27bd5eaf3fb1f62b8b0"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.334281 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.335434 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" event={"ID":"7b0df3b7-68cf-4cb0-94b0-69b394da89c5","Type":"ContainerStarted","Data":"d89f2c2e9fa78914eba4eaa63394c0bc6be2820e2f2bf97002888a17e425c53c"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.335689 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.337049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" 
event={"ID":"7e8608e6-cd83-4feb-ba63-261fc1a78437","Type":"ContainerStarted","Data":"a935868347a2ed8a6c82f9abb4f2360a53a83728c88f17606eed980c7b30b508"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.337138 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.338353 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" event={"ID":"acac6a68-fe33-41eb-8f49-0fd47cc4f0d4","Type":"ContainerStarted","Data":"b0e6b9780fda55535f1c8a0362db994d190f9fea206f499d2a41fe029297823a"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.339710 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" event={"ID":"d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac","Type":"ContainerStarted","Data":"45ac1aa89666882ae8077cc09c7ca38b1905f315a5ba55eb498916b2f6bf92a4"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.339837 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.341608 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" event={"ID":"fb69ed7c-575f-4abd-8bbd-a5a884e9333b","Type":"ContainerStarted","Data":"9cc0153191766a8f2005797b521f03ef9423ab8d9ba6d1905d8d56da137c702a"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.341757 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.343371 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" event={"ID":"30794b6d-3a42-4d85-bdb3-adaf55b73301","Type":"ContainerStarted","Data":"5af9d7533079a1d31b1ca37d6b59317e6e4bd914f621646dcd0ea299606532cf"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.343424 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.345030 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" event={"ID":"3ad669b6-5937-4a7a-9d0b-b54da1542c6f","Type":"ContainerStarted","Data":"a9424c62bddf630bbfe318e6d03069d855d2e9d7a8617ce6cf6874475919fc11"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.345199 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.346744 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" event={"ID":"0e209e55-35cd-418f-902b-c16a5992677e","Type":"ContainerStarted","Data":"ae0dac9658e9f97f98a425c11d26ae30aa58db9028e04051e13ca888f28a2f46"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.346909 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 
13:15:55.348161 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" event={"ID":"937de608-a64a-40ab-8a80-90800c18cf8f","Type":"ContainerStarted","Data":"72ca320c26f32e48b4cca26d9141db97d75335c64501c4a941f38291b28369a6"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.348319 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.349655 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" event={"ID":"f25860b5-436a-486b-9d7d-065f19ac7f68","Type":"ContainerStarted","Data":"cf4de3b5e0c11310465a945fc67e393db19264cf4747a4a86a3a852830ea824b"} Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.349794 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.353353 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx" podStartSLOduration=3.310977308 podStartE2EDuration="16.353330578s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.205711125 +0000 UTC m=+792.118047575" lastFinishedPulling="2026-02-02 13:15:54.248064395 +0000 UTC m=+805.160400845" observedRunningTime="2026-02-02 13:15:55.351624016 +0000 UTC m=+806.263960466" watchObservedRunningTime="2026-02-02 13:15:55.353330578 +0000 UTC m=+806.265667028" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.386299 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq" podStartSLOduration=3.378056114 podStartE2EDuration="16.386272481s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.190658773 +0000 UTC m=+792.102995233" lastFinishedPulling="2026-02-02 13:15:54.19887515 +0000 UTC m=+805.111211600" observedRunningTime="2026-02-02 13:15:55.380671696 +0000 UTC m=+806.293008156" watchObservedRunningTime="2026-02-02 13:15:55.386272481 +0000 UTC m=+806.298608931" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.409478 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d" podStartSLOduration=3.251514886 podStartE2EDuration="16.40946012s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.037740369 +0000 UTC m=+791.950076819" lastFinishedPulling="2026-02-02 13:15:54.195685603 +0000 UTC m=+805.108022053" observedRunningTime="2026-02-02 13:15:55.408130607 +0000 UTC m=+806.320467077" watchObservedRunningTime="2026-02-02 13:15:55.40946012 +0000 UTC m=+806.321796590" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.427285 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj" podStartSLOduration=3.509388677 podStartE2EDuration="16.427258818s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.214277351 +0000 UTC m=+792.126613791" lastFinishedPulling="2026-02-02 13:15:54.132147482 +0000 UTC m=+805.044483932" 
observedRunningTime="2026-02-02 13:15:55.42401155 +0000 UTC m=+806.336348010" watchObservedRunningTime="2026-02-02 13:15:55.427258818 +0000 UTC m=+806.339595268" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.451737 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4" podStartSLOduration=2.972951516 podStartE2EDuration="16.451721887s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:40.653447653 +0000 UTC m=+791.565784093" lastFinishedPulling="2026-02-02 13:15:54.132218014 +0000 UTC m=+805.044554464" observedRunningTime="2026-02-02 13:15:55.448314055 +0000 UTC m=+806.360650505" watchObservedRunningTime="2026-02-02 13:15:55.451721887 +0000 UTC m=+806.364058337" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.480817 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6" podStartSLOduration=3.361676881 podStartE2EDuration="16.480790228s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.059283269 +0000 UTC m=+791.971619709" lastFinishedPulling="2026-02-02 13:15:54.178396606 +0000 UTC m=+805.090733056" observedRunningTime="2026-02-02 13:15:55.476405781 +0000 UTC m=+806.388742241" watchObservedRunningTime="2026-02-02 13:15:55.480790228 +0000 UTC m=+806.393126678" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.510493 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.517025 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b61aade3-b2b3-4a5f-9862-a2018e56ea03-cert\") pod \"infra-operator-controller-manager-79955696d6-cphc8\" (UID: \"b61aade3-b2b3-4a5f-9862-a2018e56ea03\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.542438 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75" podStartSLOduration=3.42728531 podStartE2EDuration="16.542414272s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.05436505 +0000 UTC m=+791.966701500" lastFinishedPulling="2026-02-02 13:15:54.169494012 +0000 UTC m=+805.081830462" observedRunningTime="2026-02-02 13:15:55.513219969 +0000 UTC m=+806.425556409" watchObservedRunningTime="2026-02-02 13:15:55.542414272 +0000 UTC m=+806.454750732" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.546992 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb" podStartSLOduration=2.979532105 podStartE2EDuration="16.546968342s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:40.666222311 +0000 UTC m=+791.578558761" lastFinishedPulling="2026-02-02 13:15:54.233658518 +0000 UTC m=+805.145994998" observedRunningTime="2026-02-02 13:15:55.535324461 +0000 UTC m=+806.447660931" 
watchObservedRunningTime="2026-02-02 13:15:55.546968342 +0000 UTC m=+806.459304792" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.568026 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7" podStartSLOduration=3.466433392 podStartE2EDuration="16.568000488s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.067904846 +0000 UTC m=+791.980241296" lastFinishedPulling="2026-02-02 13:15:54.169471942 +0000 UTC m=+805.081808392" observedRunningTime="2026-02-02 13:15:55.567861815 +0000 UTC m=+806.480198265" watchObservedRunningTime="2026-02-02 13:15:55.568000488 +0000 UTC m=+806.480336938" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.575834 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.613966 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl" podStartSLOduration=3.654312018 podStartE2EDuration="16.613907084s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.209870276 +0000 UTC m=+792.122206726" lastFinishedPulling="2026-02-02 13:15:54.169465342 +0000 UTC m=+805.081801792" observedRunningTime="2026-02-02 13:15:55.601040123 +0000 UTC m=+806.513376583" watchObservedRunningTime="2026-02-02 13:15:55.613907084 +0000 UTC m=+806.526243534" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.657811 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk" podStartSLOduration=3.931994225 podStartE2EDuration="16.65777925s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.406377528 +0000 UTC m=+792.318713978" lastFinishedPulling="2026-02-02 13:15:54.132162553 +0000 UTC m=+805.044499003" observedRunningTime="2026-02-02 13:15:55.638170948 +0000 UTC m=+806.550507408" watchObservedRunningTime="2026-02-02 13:15:55.65777925 +0000 UTC m=+806.570115700" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.688029 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m" podStartSLOduration=3.558177322 podStartE2EDuration="16.688008149s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:40.494427113 +0000 UTC m=+791.406763563" lastFinishedPulling="2026-02-02 13:15:53.62425794 +0000 UTC m=+804.536594390" observedRunningTime="2026-02-02 13:15:55.67604479 +0000 UTC m=+806.588381250" watchObservedRunningTime="2026-02-02 13:15:55.688008149 +0000 UTC m=+806.600344609" Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.731543 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" podStartSLOduration=3.599068407 podStartE2EDuration="16.731506347s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.067285681 +0000 UTC m=+791.979622131" lastFinishedPulling="2026-02-02 13:15:54.199723621 +0000 UTC m=+805.112060071" observedRunningTime="2026-02-02 13:15:55.710271204 +0000 UTC m=+806.622607654" watchObservedRunningTime="2026-02-02 13:15:55.731506347 +0000 
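
In these pod_startup_latency_tracker entries, podStartSLOduration is the end-to-end startup time with image pulling excluded, which is why it is so much smaller than podStartE2EDuration while pulls were being throttled. Checking that against the nova-operator entry above as a worked example: E2E duration 16.353330578s (watchObservedRunningTime 13:15:55.353330578 minus podCreationTimestamp 13:15:39), pull window 13.042353270s (lastFinishedPulling 13:15:54.248064395 minus firstStartedPulling 13:15:41.205711125), and 16.353330578 - 13.042353270 = 3.310977308, exactly the logged podStartSLOduration.
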
Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.731543 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n" podStartSLOduration=3.599068407 podStartE2EDuration="16.731506347s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.067285681 +0000 UTC m=+791.979622131" lastFinishedPulling="2026-02-02 13:15:54.199723621 +0000 UTC m=+805.112060071" observedRunningTime="2026-02-02 13:15:55.710271204 +0000 UTC m=+806.622607654" watchObservedRunningTime="2026-02-02 13:15:55.731506347 +0000 UTC m=+806.643842797"
Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.758960 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv" podStartSLOduration=3.079787397 podStartE2EDuration="15.758935867s" podCreationTimestamp="2026-02-02 13:15:40 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.498949159 +0000 UTC m=+792.411285609" lastFinishedPulling="2026-02-02 13:15:54.178097629 +0000 UTC m=+805.090434079" observedRunningTime="2026-02-02 13:15:55.740081053 +0000 UTC m=+806.652417513" watchObservedRunningTime="2026-02-02 13:15:55.758935867 +0000 UTC m=+806.671272317"
Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.824306 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.852703 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b15e2e-40e0-4677-b580-667f81fd3550-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v\" (UID: \"79b15e2e-40e0-4677-b580-667f81fd3550\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:15:55 crc kubenswrapper[4955]: I0202 13:15:55.938398 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.233094 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"]
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.234199 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.234318 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:56 crc kubenswrapper[4955]: E0202 13:15:56.234505 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 13:15:56 crc kubenswrapper[4955]: E0202 13:15:56.234590 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs podName:6ad0132c-7b8d-4342-8668-23e66e695a6e nodeName:}" failed. No retries permitted until 2026-02-02 13:16:12.234552653 +0000 UTC m=+823.146889103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs") pod "openstack-operator-controller-manager-5fb457d788-z4g9c" (UID: "6ad0132c-7b8d-4342-8668-23e66e695a6e") : secret "webhook-server-cert" not found
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.254596 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-metrics-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.356772 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" event={"ID":"b61aade3-b2b3-4a5f-9862-a2018e56ea03","Type":"ContainerStarted","Data":"0ddb8a76bb21221123aa3ee52f4fdb297b1840b102619acffbc413d1f52131c5"}
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.357688 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4"
Feb 02 13:15:56 crc kubenswrapper[4955]: I0202 13:15:56.446244 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"]
Feb 02 13:15:56 crc kubenswrapper[4955]: W0202 13:15:56.446507 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79b15e2e_40e0_4677_b580_667f81fd3550.slice/crio-6b55eb296449c43a9e10bd7780ec24e237e83c49b9584c88445472c4fd3bf7c6 WatchSource:0}: Error finding container 6b55eb296449c43a9e10bd7780ec24e237e83c49b9584c88445472c4fd3bf7c6: Status 404 returned error can't find the container with id 6b55eb296449c43a9e10bd7780ec24e237e83c49b9584c88445472c4fd3bf7c6
Feb 02 13:15:57 crc kubenswrapper[4955]: I0202 13:15:57.363936 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" event={"ID":"79b15e2e-40e0-4677-b580-667f81fd3550","Type":"ContainerStarted","Data":"6b55eb296449c43a9e10bd7780ec24e237e83c49b9584c88445472c4fd3bf7c6"}
Feb 02 13:15:59 crc kubenswrapper[4955]: I0202 13:15:59.814524 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sd58m"
Feb 02 13:15:59 crc kubenswrapper[4955]: I0202 13:15:59.842877 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-ts8vb"
Feb 02 13:15:59 crc kubenswrapper[4955]: I0202 13:15:59.855925 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-m2vf4"
Feb 02 13:15:59 crc kubenswrapper[4955]: I0202 13:15:59.895130 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9jq6d"
Feb 02 13:15:59 crc kubenswrapper[4955]: I0202 13:15:59.920633 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7zjp6"
Feb 02 13:15:59 crc kubenswrapper[4955]: I0202 13:15:59.954702 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-99z75"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.045795 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vbqw7"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.088845 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2xj4n"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.176304 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bw6fj"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.246854 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jh2sx"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.278973 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-nmvcl"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.368214 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-q8lhq"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.590341 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pwrfk"
Feb 02 13:16:00 crc kubenswrapper[4955]: I0202 13:16:00.704295 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-bnxrv"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.415072 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" event={"ID":"0e5f1bee-07dd-4eaf-9a3b-328845abb141","Type":"ContainerStarted","Data":"691d8d1951142af1d64c836d20c7932f85b630ebf1989727a8e25757f5033b59"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.415860 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.417694 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" event={"ID":"212960c6-7b05-4094-ade0-e957cb3b76c8","Type":"ContainerStarted","Data":"74dfadf96e83b76cf20f825d52fa10a65d07a074a00fe5caf32f4627cc877d8a"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.418050 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.419157 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" event={"ID":"79b15e2e-40e0-4677-b580-667f81fd3550","Type":"ContainerStarted","Data":"0809fd12a44e47c1fb25ec7ebcfba9925f981aab040c628bcfe8332d24c9e527"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.419288 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.420763 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" event={"ID":"f231eb80-56bc-48bd-b412-a4247d11317f","Type":"ContainerStarted","Data":"28221d73441653ba7ee0d0ab3c721f37b6bec327878963d94778532d27b795dc"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.422176 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" event={"ID":"b61aade3-b2b3-4a5f-9862-a2018e56ea03","Type":"ContainerStarted","Data":"1cf5f8e110da9ebca7ac96fd1bde3e59e175ddf3394d3a0d9a5570213de31dc6"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.422306 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.423533 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" event={"ID":"ae3aa85d-803e-42ca-aff0-b2f5cb660355","Type":"ContainerStarted","Data":"6dfb5a5f5013fa3dcea5e84c9acea61c873308d178d89678b0ff7a34065a5167"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.423817 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.425107 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" event={"ID":"14a7569a-24fd-4aea-828d-ada50de34686","Type":"ContainerStarted","Data":"e930c0d34e744dab127c09c4ca79c46216f444145775e9f19e273b1baef8cc16"}
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.425350 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.434532 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx" podStartSLOduration=2.847623938 podStartE2EDuration="25.434519445s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.407869925 +0000 UTC m=+792.320206365" lastFinishedPulling="2026-02-02 13:16:03.994765422 +0000 UTC m=+814.907101872" observedRunningTime="2026-02-02 13:16:04.431856761 +0000 UTC m=+815.344193241" watchObservedRunningTime="2026-02-02 13:16:04.434519445 +0000 UTC m=+815.346855895"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.463233 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v" podStartSLOduration=17.952723201 podStartE2EDuration="25.463216406s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:56.452121033 +0000 UTC m=+807.364457483" lastFinishedPulling="2026-02-02 13:16:03.962614238 +0000 UTC m=+814.874950688" observedRunningTime="2026-02-02 13:16:04.458039981 +0000 UTC m=+815.370376431" watchObservedRunningTime="2026-02-02 13:16:04.463216406 +0000 UTC m=+815.375552856"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.477639 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws" podStartSLOduration=2.7627060610000003 podStartE2EDuration="25.477622933s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.241903477 +0000 UTC m=+792.154239927" lastFinishedPulling="2026-02-02 13:16:03.956820349 +0000 UTC m=+814.869156799" observedRunningTime="2026-02-02 13:16:04.473982026 +0000 UTC m=+815.386318496" watchObservedRunningTime="2026-02-02 13:16:04.477622933 +0000 UTC m=+815.389959383"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.498852 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r" podStartSLOduration=4.132599788 podStartE2EDuration="25.498810783s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.250609607 +0000 UTC m=+792.162946057" lastFinishedPulling="2026-02-02 13:16:02.616820602 +0000 UTC m=+813.529157052" observedRunningTime="2026-02-02 13:16:04.496549699 +0000 UTC m=+815.408886149" watchObservedRunningTime="2026-02-02 13:16:04.498810783 +0000 UTC m=+815.411147233"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.537239 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8" podStartSLOduration=19.165669899 podStartE2EDuration="25.537217929s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:56.245270492 +0000 UTC m=+807.157606932" lastFinishedPulling="2026-02-02 13:16:02.616818492 +0000 UTC m=+813.529154962" observedRunningTime="2026-02-02 13:16:04.518588421 +0000 UTC m=+815.430924861" watchObservedRunningTime="2026-02-02 13:16:04.537217929 +0000 UTC m=+815.449554379"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.547366 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-npj7b" podStartSLOduration=2.060833163 podStartE2EDuration="24.547342423s" podCreationTimestamp="2026-02-02 13:15:40 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.50479841 +0000 UTC m=+792.417134860" lastFinishedPulling="2026-02-02 13:16:03.99130767 +0000 UTC m=+814.903644120" observedRunningTime="2026-02-02 13:16:04.5409852 +0000 UTC m=+815.453321650" watchObservedRunningTime="2026-02-02 13:16:04.547342423 +0000 UTC m=+815.459678873"
Feb 02 13:16:04 crc kubenswrapper[4955]: I0202 13:16:04.557487 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76" podStartSLOduration=3.002013625 podStartE2EDuration="25.557469646s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.411788029 +0000 UTC m=+792.324124479" lastFinishedPulling="2026-02-02 13:16:03.96724405 +0000 UTC m=+814.879580500" observedRunningTime="2026-02-02 13:16:04.557332413 +0000 UTC m=+815.469668873" watchObservedRunningTime="2026-02-02 13:16:04.557469646 +0000 UTC m=+815.469806116"
Feb 02 13:16:10 crc kubenswrapper[4955]: I0202 13:16:10.228772 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-ndfws"
Feb 02 13:16:10 crc kubenswrapper[4955]: I0202 13:16:10.309545 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-29k76"
Feb 02 13:16:10 crc kubenswrapper[4955]: I0202 13:16:10.437478 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hxf5r"
Feb 02 13:16:10 crc kubenswrapper[4955]: I0202 13:16:10.503446 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx"
Feb 02 13:16:12 crc kubenswrapper[4955]: I0202 13:16:12.285932 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:16:12 crc kubenswrapper[4955]: I0202 13:16:12.292121 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ad0132c-7b8d-4342-8668-23e66e695a6e-webhook-certs\") pod \"openstack-operator-controller-manager-5fb457d788-z4g9c\" (UID: \"6ad0132c-7b8d-4342-8668-23e66e695a6e\") " pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:16:12 crc kubenswrapper[4955]: I0202 13:16:12.507803 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"
Feb 02 13:16:12 crc kubenswrapper[4955]: I0202 13:16:12.913749 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c"]
Feb 02 13:16:12 crc kubenswrapper[4955]: W0202 13:16:12.916758 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad0132c_7b8d_4342_8668_23e66e695a6e.slice/crio-2412e00a0b98f0059166e549bd07c6a722b0e8e8fedf593024b6195407808d6f WatchSource:0}: Error finding container 2412e00a0b98f0059166e549bd07c6a722b0e8e8fedf593024b6195407808d6f: Status 404 returned error can't find the container with id 2412e00a0b98f0059166e549bd07c6a722b0e8e8fedf593024b6195407808d6f
Feb 02 13:16:13 crc kubenswrapper[4955]: I0202 13:16:13.500140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" event={"ID":"6ad0132c-7b8d-4342-8668-23e66e695a6e","Type":"ContainerStarted","Data":"2412e00a0b98f0059166e549bd07c6a722b0e8e8fedf593024b6195407808d6f"}
Feb 02 13:16:15 crc kubenswrapper[4955]: I0202 13:16:15.583158 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-cphc8"
Feb 02 13:16:15 crc kubenswrapper[4955]: I0202 13:16:15.945300 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v"
Feb 02 13:16:18 crc kubenswrapper[4955]: I0202 13:16:18.532695 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" event={"ID":"6ad0132c-7b8d-4342-8668-23e66e695a6e","Type":"ContainerStarted","Data":"f8fe609abdb5321684e50c44b032575f06b4d7514518de6cb3479f1596d20137"}
Feb 02 13:16:18 crc kubenswrapper[4955]: I0202 13:16:18.533706
4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:16:18 crc kubenswrapper[4955]: I0202 13:16:18.534803 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" event={"ID":"697fa7c8-fb2d-411e-ad98-d7240bde28ae","Type":"ContainerStarted","Data":"8775faddac5416c52a19c8ae1dafdd61eaf3a3c9432ec66042a81e85a84c4309"} Feb 02 13:16:18 crc kubenswrapper[4955]: I0202 13:16:18.534985 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:16:18 crc kubenswrapper[4955]: I0202 13:16:18.569044 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" podStartSLOduration=2.495430084 podStartE2EDuration="39.569028102s" podCreationTimestamp="2026-02-02 13:15:39 +0000 UTC" firstStartedPulling="2026-02-02 13:15:41.011443926 +0000 UTC m=+791.923780376" lastFinishedPulling="2026-02-02 13:16:18.085041934 +0000 UTC m=+828.997378394" observedRunningTime="2026-02-02 13:16:18.567038754 +0000 UTC m=+829.479375204" watchObservedRunningTime="2026-02-02 13:16:18.569028102 +0000 UTC m=+829.481364552" Feb 02 13:16:18 crc kubenswrapper[4955]: I0202 13:16:18.571829 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" podStartSLOduration=38.57182194 podStartE2EDuration="38.57182194s" podCreationTimestamp="2026-02-02 13:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:16:18.555575328 +0000 UTC m=+829.467911778" watchObservedRunningTime="2026-02-02 13:16:18.57182194 +0000 UTC m=+829.484158390" Feb 02 13:16:22 crc kubenswrapper[4955]: I0202 13:16:22.514297 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fb457d788-z4g9c" Feb 02 13:16:30 crc kubenswrapper[4955]: I0202 13:16:30.070589 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-6f8mt" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.210461 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k7qvh"] Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.212917 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.214465 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.217885 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pzdt7" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.217997 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.218364 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.234640 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k7qvh"] Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.265290 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2zgdn"] Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.266528 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.270995 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.283496 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2zgdn"] Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.319669 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65604713-c812-403a-8f50-523cc35d4e30-config\") pod \"dnsmasq-dns-675f4bcbfc-k7qvh\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.319773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bv9\" (UniqueName: \"kubernetes.io/projected/65604713-c812-403a-8f50-523cc35d4e30-kube-api-access-h2bv9\") pod \"dnsmasq-dns-675f4bcbfc-k7qvh\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.421344 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2bv9\" (UniqueName: \"kubernetes.io/projected/65604713-c812-403a-8f50-523cc35d4e30-kube-api-access-h2bv9\") pod \"dnsmasq-dns-675f4bcbfc-k7qvh\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.421422 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.421479 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-config\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" 
Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.421524 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65604713-c812-403a-8f50-523cc35d4e30-config\") pod \"dnsmasq-dns-675f4bcbfc-k7qvh\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.421577 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldnw\" (UniqueName: \"kubernetes.io/projected/408d264a-b7ba-4686-a994-85c0a9385e40-kube-api-access-7ldnw\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.422828 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65604713-c812-403a-8f50-523cc35d4e30-config\") pod \"dnsmasq-dns-675f4bcbfc-k7qvh\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.443879 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2bv9\" (UniqueName: \"kubernetes.io/projected/65604713-c812-403a-8f50-523cc35d4e30-kube-api-access-h2bv9\") pod \"dnsmasq-dns-675f4bcbfc-k7qvh\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.522332 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.522721 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-config\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.522773 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldnw\" (UniqueName: \"kubernetes.io/projected/408d264a-b7ba-4686-a994-85c0a9385e40-kube-api-access-7ldnw\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.523282 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.523740 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-config\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.537008 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.540925 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldnw\" (UniqueName: \"kubernetes.io/projected/408d264a-b7ba-4686-a994-85c0a9385e40-kube-api-access-7ldnw\") pod \"dnsmasq-dns-78dd6ddcc-2zgdn\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:47 crc kubenswrapper[4955]: I0202 13:16:47.582744 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:16:48 crc kubenswrapper[4955]: I0202 13:16:48.011803 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:16:48 crc kubenswrapper[4955]: I0202 13:16:48.013882 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k7qvh"] Feb 02 13:16:48 crc kubenswrapper[4955]: I0202 13:16:48.108309 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2zgdn"] Feb 02 13:16:48 crc kubenswrapper[4955]: W0202 13:16:48.111878 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod408d264a_b7ba_4686_a994_85c0a9385e40.slice/crio-25fd3081078e5096fad84f2080d511d52a59a7db1bf7153d101941cc1f55e125 WatchSource:0}: Error finding container 25fd3081078e5096fad84f2080d511d52a59a7db1bf7153d101941cc1f55e125: Status 404 returned error can't find the container with id 25fd3081078e5096fad84f2080d511d52a59a7db1bf7153d101941cc1f55e125 Feb 02 13:16:48 crc kubenswrapper[4955]: I0202 13:16:48.750073 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" event={"ID":"65604713-c812-403a-8f50-523cc35d4e30","Type":"ContainerStarted","Data":"2e4b9ef917bd48b1cdae16a94febd997c3c865afdfc6e4941fb015655f83c16f"} Feb 02 13:16:48 crc kubenswrapper[4955]: I0202 13:16:48.750932 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" event={"ID":"408d264a-b7ba-4686-a994-85c0a9385e40","Type":"ContainerStarted","Data":"25fd3081078e5096fad84f2080d511d52a59a7db1bf7153d101941cc1f55e125"} Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.160046 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k7qvh"] Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.179248 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m4f8x"] Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.180642 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.189356 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m4f8x"] Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.363327 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-config\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.363637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwwz\" (UniqueName: \"kubernetes.io/projected/5a56fa80-19ba-40de-a107-32577f31ed7a-kube-api-access-4bwwz\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.363779 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.469340 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.469410 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-config\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.469459 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bwwz\" (UniqueName: \"kubernetes.io/projected/5a56fa80-19ba-40de-a107-32577f31ed7a-kube-api-access-4bwwz\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.470594 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.478202 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-config\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.507542 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bwwz\" (UniqueName: 
\"kubernetes.io/projected/5a56fa80-19ba-40de-a107-32577f31ed7a-kube-api-access-4bwwz\") pod \"dnsmasq-dns-666b6646f7-m4f8x\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.554494 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2zgdn"] Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.574027 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.605071 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvpbz"] Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.606429 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.617333 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvpbz"] Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.783003 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-config\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.783603 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.783636 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjg4j\" (UniqueName: \"kubernetes.io/projected/dff8da18-59aa-4616-b217-66f739215534-kube-api-access-xjg4j\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.890434 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.890487 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjg4j\" (UniqueName: \"kubernetes.io/projected/dff8da18-59aa-4616-b217-66f739215534-kube-api-access-xjg4j\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.890660 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-config\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.891636 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.892339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-config\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.912021 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjg4j\" (UniqueName: \"kubernetes.io/projected/dff8da18-59aa-4616-b217-66f739215534-kube-api-access-xjg4j\") pod \"dnsmasq-dns-57d769cc4f-qvpbz\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:50 crc kubenswrapper[4955]: I0202 13:16:50.990797 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.110976 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m4f8x"] Feb 02 13:16:51 crc kubenswrapper[4955]: W0202 13:16:51.126696 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a56fa80_19ba_40de_a107_32577f31ed7a.slice/crio-6678b7aa7f0d35ab3fda2192b68c649f79b9d9099fc1ffa7e1c842ae89af991d WatchSource:0}: Error finding container 6678b7aa7f0d35ab3fda2192b68c649f79b9d9099fc1ffa7e1c842ae89af991d: Status 404 returned error can't find the container with id 6678b7aa7f0d35ab3fda2192b68c649f79b9d9099fc1ffa7e1c842ae89af991d Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.253850 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvpbz"] Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.373391 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.375021 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.376873 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x5n8j" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.377077 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.377482 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.377665 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.377869 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.378103 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.378305 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.384131 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500411 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500486 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/60f684bd-051c-4608-8c11-1058cd2d6a01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500546 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500635 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500667 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-config-data\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500685 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500760 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/60f684bd-051c-4608-8c11-1058cd2d6a01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500793 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500821 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445hq\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-kube-api-access-445hq\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500849 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.500866 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.607673 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/60f684bd-051c-4608-8c11-1058cd2d6a01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608474 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608529 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-445hq\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-kube-api-access-445hq\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608610 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " 
pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608646 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608684 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608754 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/60f684bd-051c-4608-8c11-1058cd2d6a01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608815 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608844 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-config-data\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.608885 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.609349 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.609889 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.610630 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.614403 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.614623 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.614679 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-config-data\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.618532 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.619528 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/60f684bd-051c-4608-8c11-1058cd2d6a01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.628125 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-445hq\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-kube-api-access-445hq\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.637533 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.658399 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/60f684bd-051c-4608-8c11-1058cd2d6a01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.664579 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.698689 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.705985 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.707885 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719202 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719523 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719600 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ct2pn" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719735 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719775 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719882 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.719906 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.744111 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.799918 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" event={"ID":"5a56fa80-19ba-40de-a107-32577f31ed7a","Type":"ContainerStarted","Data":"6678b7aa7f0d35ab3fda2192b68c649f79b9d9099fc1ffa7e1c842ae89af991d"} Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.801438 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" event={"ID":"dff8da18-59aa-4616-b217-66f739215534","Type":"ContainerStarted","Data":"f1be7f1e39db8edfccc6bcb947980cb7d02475416bd812b43bf067bb3a672c22"} Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813077 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813138 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813191 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813219 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcgr\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-kube-api-access-czcgr\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813249 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813279 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813596 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813743 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813782 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.813810 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.916039 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.916579 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czcgr\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-kube-api-access-czcgr\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.916621 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.916669 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.916700 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.916775 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917097 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917636 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917764 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917792 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917817 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917858 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.917883 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.918345 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.918471 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.918836 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.924474 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.931325 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.931714 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.932811 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.934646 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcgr\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-kube-api-access-czcgr\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.935815 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:51 crc kubenswrapper[4955]: I0202 13:16:51.980104 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.049392 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.324438 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.763325 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.764755 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.767155 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.767350 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.768007 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6pg2f" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.772738 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.776868 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.790750 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.839036 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-config-data-default\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.839182 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-kolla-config\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.839211 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.840049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdnx\" (UniqueName: \"kubernetes.io/projected/d77aebb6-5e14-4958-b762-6e1f2e2c236e-kube-api-access-hqdnx\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.840123 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77aebb6-5e14-4958-b762-6e1f2e2c236e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.840190 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d77aebb6-5e14-4958-b762-6e1f2e2c236e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.840241 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.840272 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d77aebb6-5e14-4958-b762-6e1f2e2c236e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.856525 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"60f684bd-051c-4608-8c11-1058cd2d6a01","Type":"ContainerStarted","Data":"506df5a909d842b35472b2c72ae9f3a9941b49d26f97a552aff8b740a39ee593"} Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.942308 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.942674 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d77aebb6-5e14-4958-b762-6e1f2e2c236e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.942899 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-config-data-default\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943003 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-kolla-config\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943041 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdnx\" (UniqueName: \"kubernetes.io/projected/d77aebb6-5e14-4958-b762-6e1f2e2c236e-kube-api-access-hqdnx\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943066 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943145 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77aebb6-5e14-4958-b762-6e1f2e2c236e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" 
Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943205 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d77aebb6-5e14-4958-b762-6e1f2e2c236e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943281 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.943795 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d77aebb6-5e14-4958-b762-6e1f2e2c236e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.944050 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-kolla-config\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.945548 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-config-data-default\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.950330 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d77aebb6-5e14-4958-b762-6e1f2e2c236e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.971518 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77aebb6-5e14-4958-b762-6e1f2e2c236e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.971537 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d77aebb6-5e14-4958-b762-6e1f2e2c236e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.986359 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdnx\" (UniqueName: \"kubernetes.io/projected/d77aebb6-5e14-4958-b762-6e1f2e2c236e-kube-api-access-hqdnx\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:52 crc kubenswrapper[4955]: I0202 13:16:52.992855 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"d77aebb6-5e14-4958-b762-6e1f2e2c236e\") " pod="openstack/openstack-galera-0" Feb 02 13:16:53 crc kubenswrapper[4955]: I0202 13:16:53.037228 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:16:53 crc kubenswrapper[4955]: W0202 13:16:53.101401 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod827225b6_1672_40b1_a9ee_7dd2d5db2d1d.slice/crio-58f987034d906973bb20497c3fb808c9bd1776be3030461079d7dc94c82723ce WatchSource:0}: Error finding container 58f987034d906973bb20497c3fb808c9bd1776be3030461079d7dc94c82723ce: Status 404 returned error can't find the container with id 58f987034d906973bb20497c3fb808c9bd1776be3030461079d7dc94c82723ce Feb 02 13:16:53 crc kubenswrapper[4955]: I0202 13:16:53.121639 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 13:16:53 crc kubenswrapper[4955]: I0202 13:16:53.738500 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 13:16:53 crc kubenswrapper[4955]: I0202 13:16:53.902234 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"827225b6-1672-40b1-a9ee-7dd2d5db2d1d","Type":"ContainerStarted","Data":"58f987034d906973bb20497c3fb808c9bd1776be3030461079d7dc94c82723ce"} Feb 02 13:16:53 crc kubenswrapper[4955]: I0202 13:16:53.905436 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d77aebb6-5e14-4958-b762-6e1f2e2c236e","Type":"ContainerStarted","Data":"db21349f6c6fb7e46c07f52e2fc9dbec90db2ae55b768b5273c9d942bd3c0e3b"} Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.120508 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.122003 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.128531 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.129509 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.129723 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.135685 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2llrl" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.140400 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.284926 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3dda1e4-d043-4acc-ba59-2c64762956be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.284993 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3dda1e4-d043-4acc-ba59-2c64762956be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.285022 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.285065 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.285087 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.285102 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqdz\" (UniqueName: \"kubernetes.io/projected/d3dda1e4-d043-4acc-ba59-2c64762956be-kube-api-access-gjqdz\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.285119 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.285527 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3dda1e4-d043-4acc-ba59-2c64762956be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.362328 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.365813 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.370820 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vf6sm" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.370839 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.370899 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.373774 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414347 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3dda1e4-d043-4acc-ba59-2c64762956be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414482 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3dda1e4-d043-4acc-ba59-2c64762956be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414546 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414663 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414725 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqdz\" (UniqueName: \"kubernetes.io/projected/d3dda1e4-d043-4acc-ba59-2c64762956be-kube-api-access-gjqdz\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414754 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.414861 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3dda1e4-d043-4acc-ba59-2c64762956be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.448548 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.468063 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.470256 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.470471 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3dda1e4-d043-4acc-ba59-2c64762956be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.477308 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dda1e4-d043-4acc-ba59-2c64762956be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.484633 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3dda1e4-d043-4acc-ba59-2c64762956be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.486000 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d3dda1e4-d043-4acc-ba59-2c64762956be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.487433 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqdz\" (UniqueName: \"kubernetes.io/projected/d3dda1e4-d043-4acc-ba59-2c64762956be-kube-api-access-gjqdz\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.517241 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f33de66-48c8-4dfb-954d-bf70e5791e04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.517639 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f33de66-48c8-4dfb-954d-bf70e5791e04-config-data\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.517908 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzz99\" (UniqueName: \"kubernetes.io/projected/4f33de66-48c8-4dfb-954d-bf70e5791e04-kube-api-access-xzz99\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.518044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f33de66-48c8-4dfb-954d-bf70e5791e04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.518172 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f33de66-48c8-4dfb-954d-bf70e5791e04-kolla-config\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.558817 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d3dda1e4-d043-4acc-ba59-2c64762956be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.620066 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f33de66-48c8-4dfb-954d-bf70e5791e04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.620177 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f33de66-48c8-4dfb-954d-bf70e5791e04-config-data\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" 
Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.620216 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzz99\" (UniqueName: \"kubernetes.io/projected/4f33de66-48c8-4dfb-954d-bf70e5791e04-kube-api-access-xzz99\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.620266 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f33de66-48c8-4dfb-954d-bf70e5791e04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.620305 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f33de66-48c8-4dfb-954d-bf70e5791e04-kolla-config\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.621290 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f33de66-48c8-4dfb-954d-bf70e5791e04-kolla-config\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.625363 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f33de66-48c8-4dfb-954d-bf70e5791e04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.631974 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f33de66-48c8-4dfb-954d-bf70e5791e04-config-data\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.634876 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f33de66-48c8-4dfb-954d-bf70e5791e04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.648696 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzz99\" (UniqueName: \"kubernetes.io/projected/4f33de66-48c8-4dfb-954d-bf70e5791e04-kube-api-access-xzz99\") pod \"memcached-0\" (UID: \"4f33de66-48c8-4dfb-954d-bf70e5791e04\") " pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.764873 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 13:16:54 crc kubenswrapper[4955]: I0202 13:16:54.767421 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 13:16:55 crc kubenswrapper[4955]: I0202 13:16:55.298371 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 13:16:55 crc kubenswrapper[4955]: I0202 13:16:55.428905 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.605938 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.607757 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.611811 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2g268" Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.629819 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.771747 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxtb\" (UniqueName: \"kubernetes.io/projected/4a5d9947-5c29-430e-b975-78024809faed-kube-api-access-dzxtb\") pod \"kube-state-metrics-0\" (UID: \"4a5d9947-5c29-430e-b975-78024809faed\") " pod="openstack/kube-state-metrics-0" Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.873528 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzxtb\" (UniqueName: \"kubernetes.io/projected/4a5d9947-5c29-430e-b975-78024809faed-kube-api-access-dzxtb\") pod \"kube-state-metrics-0\" (UID: \"4a5d9947-5c29-430e-b975-78024809faed\") " pod="openstack/kube-state-metrics-0" Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.897636 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxtb\" (UniqueName: \"kubernetes.io/projected/4a5d9947-5c29-430e-b975-78024809faed-kube-api-access-dzxtb\") pod \"kube-state-metrics-0\" (UID: \"4a5d9947-5c29-430e-b975-78024809faed\") " pod="openstack/kube-state-metrics-0" Feb 02 13:16:56 crc kubenswrapper[4955]: I0202 13:16:56.937496 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.736363 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p7twj"] Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.738733 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.741446 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.741694 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-94mhz" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.741828 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.753717 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7twj"] Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.822526 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-h5j4t"] Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.824955 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/480861f6-44ea-41c3-806e-497f3177eb91-scripts\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.825030 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/480861f6-44ea-41c3-806e-497f3177eb91-ovn-controller-tls-certs\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.825049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-run\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.825064 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-run-ovn\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.825096 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdrf\" (UniqueName: \"kubernetes.io/projected/480861f6-44ea-41c3-806e-497f3177eb91-kube-api-access-6vdrf\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.825122 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480861f6-44ea-41c3-806e-497f3177eb91-combined-ca-bundle\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.825144 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-log-ovn\") pod \"ovn-controller-p7twj\" 
(UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.827590 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.854535 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h5j4t"] Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.926782 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/480861f6-44ea-41c3-806e-497f3177eb91-scripts\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.926861 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/480861f6-44ea-41c3-806e-497f3177eb91-ovn-controller-tls-certs\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.926888 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-run\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.926910 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-run-ovn\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.926955 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5sk\" (UniqueName: \"kubernetes.io/projected/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-kube-api-access-vj5sk\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.926999 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vdrf\" (UniqueName: \"kubernetes.io/projected/480861f6-44ea-41c3-806e-497f3177eb91-kube-api-access-6vdrf\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927043 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-etc-ovs\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927063 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-scripts\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927083 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480861f6-44ea-41c3-806e-497f3177eb91-combined-ca-bundle\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927127 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-log\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927175 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-run\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927200 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-log-ovn\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.927254 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-lib\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.928527 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-run\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.928652 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-run-ovn\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.928916 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/480861f6-44ea-41c3-806e-497f3177eb91-var-log-ovn\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.929390 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/480861f6-44ea-41c3-806e-497f3177eb91-scripts\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.934456 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/480861f6-44ea-41c3-806e-497f3177eb91-ovn-controller-tls-certs\") pod \"ovn-controller-p7twj\" 
(UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.939814 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480861f6-44ea-41c3-806e-497f3177eb91-combined-ca-bundle\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:16:59 crc kubenswrapper[4955]: I0202 13:16:59.959277 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vdrf\" (UniqueName: \"kubernetes.io/projected/480861f6-44ea-41c3-806e-497f3177eb91-kube-api-access-6vdrf\") pod \"ovn-controller-p7twj\" (UID: \"480861f6-44ea-41c3-806e-497f3177eb91\") " pod="openstack/ovn-controller-p7twj" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.028677 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj5sk\" (UniqueName: \"kubernetes.io/projected/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-kube-api-access-vj5sk\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.028774 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-etc-ovs\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.028791 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-scripts\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.028814 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-log\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.028834 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-run\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.028874 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-lib\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.029144 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-etc-ovs\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.029194 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-lib\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.029235 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-run\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.029288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-var-log\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.031531 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-scripts\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.047297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj5sk\" (UniqueName: \"kubernetes.io/projected/b42e4a2b-d820-45ba-afdf-ab9e0a6787a9-kube-api-access-vj5sk\") pod \"ovn-controller-ovs-h5j4t\" (UID: \"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9\") " pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.059621 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7twj" Feb 02 13:17:00 crc kubenswrapper[4955]: I0202 13:17:00.155196 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:00 crc kubenswrapper[4955]: W0202 13:17:00.389785 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f33de66_48c8_4dfb_954d_bf70e5791e04.slice/crio-ea2b3e4d7f77e2d7153233ce1b462af8b919da8283ae0a23e5dd77fceda20e17 WatchSource:0}: Error finding container ea2b3e4d7f77e2d7153233ce1b462af8b919da8283ae0a23e5dd77fceda20e17: Status 404 returned error can't find the container with id ea2b3e4d7f77e2d7153233ce1b462af8b919da8283ae0a23e5dd77fceda20e17 Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.005954 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d3dda1e4-d043-4acc-ba59-2c64762956be","Type":"ContainerStarted","Data":"7d48f48e35e6c30e8c85bed929b590c524e28f0a89aba7e1dec75bb11fd03312"} Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.008250 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f33de66-48c8-4dfb-954d-bf70e5791e04","Type":"ContainerStarted","Data":"ea2b3e4d7f77e2d7153233ce1b462af8b919da8283ae0a23e5dd77fceda20e17"} Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.851903 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.855154 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.858248 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zgdnn" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.858429 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.858611 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.858739 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.859577 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.860300 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973241 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973298 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856fe37-3113-44d2-ac52-f28f9d5aba38-config\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973324 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2856fe37-3113-44d2-ac52-f28f9d5aba38-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973350 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2856fe37-3113-44d2-ac52-f28f9d5aba38-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973389 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973587 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973744 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:01 crc kubenswrapper[4955]: I0202 13:17:01.973795 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572dd\" (UniqueName: \"kubernetes.io/projected/2856fe37-3113-44d2-ac52-f28f9d5aba38-kube-api-access-572dd\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077654 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572dd\" (UniqueName: \"kubernetes.io/projected/2856fe37-3113-44d2-ac52-f28f9d5aba38-kube-api-access-572dd\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077752 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077780 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856fe37-3113-44d2-ac52-f28f9d5aba38-config\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077810 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2856fe37-3113-44d2-ac52-f28f9d5aba38-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077838 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2856fe37-3113-44d2-ac52-f28f9d5aba38-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077880 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077911 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.077944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 
13:17:02.078346 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.079019 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856fe37-3113-44d2-ac52-f28f9d5aba38-config\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.079214 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2856fe37-3113-44d2-ac52-f28f9d5aba38-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.080286 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2856fe37-3113-44d2-ac52-f28f9d5aba38-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.084109 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.094578 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.097379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572dd\" (UniqueName: \"kubernetes.io/projected/2856fe37-3113-44d2-ac52-f28f9d5aba38-kube-api-access-572dd\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.098331 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856fe37-3113-44d2-ac52-f28f9d5aba38-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.105967 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2856fe37-3113-44d2-ac52-f28f9d5aba38\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:02 crc kubenswrapper[4955]: I0202 13:17:02.198067 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.589748 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.592498 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.595073 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gcgr7" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.595202 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.596024 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.596177 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.601112 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.706370 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.706431 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.706656 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfjf\" (UniqueName: \"kubernetes.io/projected/43af4b7b-306d-4c1d-9947-f4749eeed848-kube-api-access-5pfjf\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.706897 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.706953 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.707011 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43af4b7b-306d-4c1d-9947-f4749eeed848-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " 
pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.707143 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43af4b7b-306d-4c1d-9947-f4749eeed848-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.707257 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43af4b7b-306d-4c1d-9947-f4749eeed848-config\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809124 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809165 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809189 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43af4b7b-306d-4c1d-9947-f4749eeed848-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809228 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43af4b7b-306d-4c1d-9947-f4749eeed848-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809263 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43af4b7b-306d-4c1d-9947-f4749eeed848-config\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809295 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809316 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809376 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pfjf\" (UniqueName: 
\"kubernetes.io/projected/43af4b7b-306d-4c1d-9947-f4749eeed848-kube-api-access-5pfjf\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.809484 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.810313 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43af4b7b-306d-4c1d-9947-f4749eeed848-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.811000 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43af4b7b-306d-4c1d-9947-f4749eeed848-config\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.811037 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43af4b7b-306d-4c1d-9947-f4749eeed848-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.815358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.823396 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.826248 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43af4b7b-306d-4c1d-9947-f4749eeed848-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.826584 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pfjf\" (UniqueName: \"kubernetes.io/projected/43af4b7b-306d-4c1d-9947-f4749eeed848-kube-api-access-5pfjf\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.829979 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"43af4b7b-306d-4c1d-9947-f4749eeed848\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:03 crc kubenswrapper[4955]: I0202 13:17:03.919025 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.189792 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qkf4x"] Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.191809 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.214607 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkf4x"] Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.268682 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda30c6d-d71c-417b-8434-ff87281d64c7-utilities\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.268796 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xlt\" (UniqueName: \"kubernetes.io/projected/eda30c6d-d71c-417b-8434-ff87281d64c7-kube-api-access-s6xlt\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.268820 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda30c6d-d71c-417b-8434-ff87281d64c7-catalog-content\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.370733 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xlt\" (UniqueName: \"kubernetes.io/projected/eda30c6d-d71c-417b-8434-ff87281d64c7-kube-api-access-s6xlt\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.370795 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda30c6d-d71c-417b-8434-ff87281d64c7-catalog-content\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.370837 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda30c6d-d71c-417b-8434-ff87281d64c7-utilities\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.371447 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda30c6d-d71c-417b-8434-ff87281d64c7-utilities\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.371513 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda30c6d-d71c-417b-8434-ff87281d64c7-catalog-content\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.396752 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xlt\" (UniqueName: \"kubernetes.io/projected/eda30c6d-d71c-417b-8434-ff87281d64c7-kube-api-access-s6xlt\") pod \"certified-operators-qkf4x\" (UID: \"eda30c6d-d71c-417b-8434-ff87281d64c7\") " pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:07 crc kubenswrapper[4955]: I0202 13:17:07.569422 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:08 crc kubenswrapper[4955]: E0202 13:17:08.430076 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 02 13:17:08 crc kubenswrapper[4955]: E0202 13:17:08.430408 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czcgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(827225b6-1672-40b1-a9ee-7dd2d5db2d1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:17:08 crc kubenswrapper[4955]: E0202 13:17:08.431726 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" Feb 02 13:17:08 crc kubenswrapper[4955]: I0202 13:17:08.773748 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:17:09 crc kubenswrapper[4955]: E0202 13:17:09.081453 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.204891 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zrnqt"] Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.209246 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.213670 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrnqt"] Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.302683 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwcw5\" (UniqueName: \"kubernetes.io/projected/decb5fa1-a3bf-46fb-8662-116a233e9dc7-kube-api-access-wwcw5\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.303059 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-catalog-content\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.303086 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-utilities\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.404480 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwcw5\" (UniqueName: \"kubernetes.io/projected/decb5fa1-a3bf-46fb-8662-116a233e9dc7-kube-api-access-wwcw5\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.404583 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-catalog-content\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.404612 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-utilities\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.405117 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-catalog-content\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.405150 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-utilities\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.429630 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wwcw5\" (UniqueName: \"kubernetes.io/projected/decb5fa1-a3bf-46fb-8662-116a233e9dc7-kube-api-access-wwcw5\") pod \"redhat-marketplace-zrnqt\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:17 crc kubenswrapper[4955]: I0202 13:17:17.562862 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:18 crc kubenswrapper[4955]: I0202 13:17:18.136366 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a5d9947-5c29-430e-b975-78024809faed","Type":"ContainerStarted","Data":"642cab9c69c8256242b451a2ef2f270cbb537bb61528b4a41bd7fd574054dd6b"} Feb 02 13:17:18 crc kubenswrapper[4955]: I0202 13:17:18.396689 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h5j4t"] Feb 02 13:17:18 crc kubenswrapper[4955]: W0202 13:17:18.773182 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb42e4a2b_d820_45ba_afdf_ab9e0a6787a9.slice/crio-6afe41a2156a86c2e4b771e4ec0e94cbe5dd60cab0e18092b587edb59c21bae1 WatchSource:0}: Error finding container 6afe41a2156a86c2e4b771e4ec0e94cbe5dd60cab0e18092b587edb59c21bae1: Status 404 returned error can't find the container with id 6afe41a2156a86c2e4b771e4ec0e94cbe5dd60cab0e18092b587edb59c21bae1 Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.820252 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.820469 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjg4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-qvpbz_openstack(dff8da18-59aa-4616-b217-66f739215534): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.821638 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" podUID="dff8da18-59aa-4616-b217-66f739215534" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.854842 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.856181 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2bv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-k7qvh_openstack(65604713-c812-403a-8f50-523cc35d4e30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.857780 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" podUID="65604713-c812-403a-8f50-523cc35d4e30" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.879414 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.879594 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ldnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2zgdn_openstack(408d264a-b7ba-4686-a994-85c0a9385e40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.884106 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" podUID="408d264a-b7ba-4686-a994-85c0a9385e40" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.899798 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.899985 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bwwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-m4f8x_openstack(5a56fa80-19ba-40de-a107-32577f31ed7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:17:18 crc kubenswrapper[4955]: E0202 13:17:18.903793 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" podUID="5a56fa80-19ba-40de-a107-32577f31ed7a" Feb 02 13:17:19 crc kubenswrapper[4955]: I0202 13:17:19.179187 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5j4t" event={"ID":"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9","Type":"ContainerStarted","Data":"6afe41a2156a86c2e4b771e4ec0e94cbe5dd60cab0e18092b587edb59c21bae1"} Feb 02 13:17:19 crc kubenswrapper[4955]: E0202 13:17:19.186822 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" podUID="5a56fa80-19ba-40de-a107-32577f31ed7a" Feb 02 13:17:19 crc kubenswrapper[4955]: E0202 13:17:19.203994 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" podUID="dff8da18-59aa-4616-b217-66f739215534" Feb 02 13:17:19 crc kubenswrapper[4955]: I0202 13:17:19.261390 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7twj"] Feb 02 13:17:19 crc kubenswrapper[4955]: W0202 
13:17:19.299111 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod480861f6_44ea_41c3_806e_497f3177eb91.slice/crio-96eb488b883d0d86d0df2520a0b92b7121a2c9ec012065d73908842957a34c45 WatchSource:0}: Error finding container 96eb488b883d0d86d0df2520a0b92b7121a2c9ec012065d73908842957a34c45: Status 404 returned error can't find the container with id 96eb488b883d0d86d0df2520a0b92b7121a2c9ec012065d73908842957a34c45 Feb 02 13:17:19 crc kubenswrapper[4955]: I0202 13:17:19.432473 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkf4x"] Feb 02 13:17:19 crc kubenswrapper[4955]: I0202 13:17:19.572482 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 13:17:19 crc kubenswrapper[4955]: I0202 13:17:19.752713 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 13:17:19 crc kubenswrapper[4955]: I0202 13:17:19.760237 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrnqt"] Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.061118 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.080918 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65604713-c812-403a-8f50-523cc35d4e30-config\") pod \"65604713-c812-403a-8f50-523cc35d4e30\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.081101 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2bv9\" (UniqueName: \"kubernetes.io/projected/65604713-c812-403a-8f50-523cc35d4e30-kube-api-access-h2bv9\") pod \"65604713-c812-403a-8f50-523cc35d4e30\" (UID: \"65604713-c812-403a-8f50-523cc35d4e30\") " Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.082619 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65604713-c812-403a-8f50-523cc35d4e30-config" (OuterVolumeSpecName: "config") pod "65604713-c812-403a-8f50-523cc35d4e30" (UID: "65604713-c812-403a-8f50-523cc35d4e30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.087262 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.088402 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65604713-c812-403a-8f50-523cc35d4e30-kube-api-access-h2bv9" (OuterVolumeSpecName: "kube-api-access-h2bv9") pod "65604713-c812-403a-8f50-523cc35d4e30" (UID: "65604713-c812-403a-8f50-523cc35d4e30"). InnerVolumeSpecName "kube-api-access-h2bv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.182597 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-config\") pod \"408d264a-b7ba-4686-a994-85c0a9385e40\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.182660 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldnw\" (UniqueName: \"kubernetes.io/projected/408d264a-b7ba-4686-a994-85c0a9385e40-kube-api-access-7ldnw\") pod \"408d264a-b7ba-4686-a994-85c0a9385e40\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.182760 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-dns-svc\") pod \"408d264a-b7ba-4686-a994-85c0a9385e40\" (UID: \"408d264a-b7ba-4686-a994-85c0a9385e40\") " Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.183082 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-config" (OuterVolumeSpecName: "config") pod "408d264a-b7ba-4686-a994-85c0a9385e40" (UID: "408d264a-b7ba-4686-a994-85c0a9385e40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.183308 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2bv9\" (UniqueName: \"kubernetes.io/projected/65604713-c812-403a-8f50-523cc35d4e30-kube-api-access-h2bv9\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.183358 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65604713-c812-403a-8f50-523cc35d4e30-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.183379 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "408d264a-b7ba-4686-a994-85c0a9385e40" (UID: "408d264a-b7ba-4686-a994-85c0a9385e40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.184542 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrnqt" event={"ID":"decb5fa1-a3bf-46fb-8662-116a233e9dc7","Type":"ContainerStarted","Data":"90b3be083f3ee0d0c4b19e503d786a5c2501b1718d153f8bc28e8ce8539b18cf"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.185474 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408d264a-b7ba-4686-a994-85c0a9385e40-kube-api-access-7ldnw" (OuterVolumeSpecName: "kube-api-access-7ldnw") pod "408d264a-b7ba-4686-a994-85c0a9385e40" (UID: "408d264a-b7ba-4686-a994-85c0a9385e40"). InnerVolumeSpecName "kube-api-access-7ldnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.186290 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d77aebb6-5e14-4958-b762-6e1f2e2c236e","Type":"ContainerStarted","Data":"a970b5b5da360ad5962149ed035d55f7abb74c1ed9ce44a8b75bd85c5341875c"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.187825 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"60f684bd-051c-4608-8c11-1058cd2d6a01","Type":"ContainerStarted","Data":"7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.190245 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7twj" event={"ID":"480861f6-44ea-41c3-806e-497f3177eb91","Type":"ContainerStarted","Data":"96eb488b883d0d86d0df2520a0b92b7121a2c9ec012065d73908842957a34c45"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.191253 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"43af4b7b-306d-4c1d-9947-f4749eeed848","Type":"ContainerStarted","Data":"30bedb6fdb57b807719d2fec8210ff14b95ff124a89d9659c78e18b11d3a916d"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.192744 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkf4x" event={"ID":"eda30c6d-d71c-417b-8434-ff87281d64c7","Type":"ContainerStarted","Data":"9f17b08ff893c90c1b093c1fb77443a3f39bb6ac18fb41344202f5d7ba0d6440"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.198803 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f33de66-48c8-4dfb-954d-bf70e5791e04","Type":"ContainerStarted","Data":"543659d4f4415f9fc5f94a8cf347583d16ce0cb813835a7b481a666a9b760118"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.199403 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.215351 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d3dda1e4-d043-4acc-ba59-2c64762956be","Type":"ContainerStarted","Data":"f249fb6d75c5ea2f96027353f3c69a2b53f0c96f82e4cb47bb7dcca02e6d9ad1"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.216795 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2856fe37-3113-44d2-ac52-f28f9d5aba38","Type":"ContainerStarted","Data":"fb4c20683bf7629f431bb5b1b8edb753acb60f49b21e2a8e7699d83968fd1f2a"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.217809 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" event={"ID":"408d264a-b7ba-4686-a994-85c0a9385e40","Type":"ContainerDied","Data":"25fd3081078e5096fad84f2080d511d52a59a7db1bf7153d101941cc1f55e125"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.217882 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2zgdn" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.223301 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" event={"ID":"65604713-c812-403a-8f50-523cc35d4e30","Type":"ContainerDied","Data":"2e4b9ef917bd48b1cdae16a94febd997c3c865afdfc6e4941fb015655f83c16f"} Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.223408 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k7qvh" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.250166 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.824213685 podStartE2EDuration="26.250149056s" podCreationTimestamp="2026-02-02 13:16:54 +0000 UTC" firstStartedPulling="2026-02-02 13:17:00.400804498 +0000 UTC m=+871.313140948" lastFinishedPulling="2026-02-02 13:17:18.826739859 +0000 UTC m=+889.739076319" observedRunningTime="2026-02-02 13:17:20.241236661 +0000 UTC m=+891.153573111" watchObservedRunningTime="2026-02-02 13:17:20.250149056 +0000 UTC m=+891.162485496" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.285053 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2zgdn"] Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.285468 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.285497 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldnw\" (UniqueName: \"kubernetes.io/projected/408d264a-b7ba-4686-a994-85c0a9385e40-kube-api-access-7ldnw\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.285509 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/408d264a-b7ba-4686-a994-85c0a9385e40-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.292690 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2zgdn"] Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.330375 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k7qvh"] Feb 02 13:17:20 crc kubenswrapper[4955]: I0202 13:17:20.335503 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k7qvh"] Feb 02 13:17:21 crc kubenswrapper[4955]: I0202 13:17:21.233082 4955 generic.go:334] "Generic (PLEG): container finished" podID="eda30c6d-d71c-417b-8434-ff87281d64c7" containerID="e09ca5a3a3938af4edb7ac8140bd2e4504a11cd7d64922e299e62888d0e7b0b2" exitCode=0 Feb 02 13:17:21 crc kubenswrapper[4955]: I0202 13:17:21.233663 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkf4x" event={"ID":"eda30c6d-d71c-417b-8434-ff87281d64c7","Type":"ContainerDied","Data":"e09ca5a3a3938af4edb7ac8140bd2e4504a11cd7d64922e299e62888d0e7b0b2"} Feb 02 13:17:21 crc kubenswrapper[4955]: I0202 13:17:21.237251 4955 generic.go:334] "Generic (PLEG): container finished" podID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerID="b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78" exitCode=0 Feb 02 13:17:21 crc kubenswrapper[4955]: I0202 13:17:21.238137 4955 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-zrnqt" event={"ID":"decb5fa1-a3bf-46fb-8662-116a233e9dc7","Type":"ContainerDied","Data":"b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78"} Feb 02 13:17:21 crc kubenswrapper[4955]: I0202 13:17:21.724318 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408d264a-b7ba-4686-a994-85c0a9385e40" path="/var/lib/kubelet/pods/408d264a-b7ba-4686-a994-85c0a9385e40/volumes" Feb 02 13:17:21 crc kubenswrapper[4955]: I0202 13:17:21.724955 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65604713-c812-403a-8f50-523cc35d4e30" path="/var/lib/kubelet/pods/65604713-c812-403a-8f50-523cc35d4e30/volumes" Feb 02 13:17:23 crc kubenswrapper[4955]: E0202 13:17:23.488601 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd77aebb6_5e14_4958_b762_6e1f2e2c236e.slice/crio-conmon-a970b5b5da360ad5962149ed035d55f7abb74c1ed9ce44a8b75bd85c5341875c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3dda1e4_d043_4acc_ba59_2c64762956be.slice/crio-f249fb6d75c5ea2f96027353f3c69a2b53f0c96f82e4cb47bb7dcca02e6d9ad1.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.256425 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2856fe37-3113-44d2-ac52-f28f9d5aba38","Type":"ContainerStarted","Data":"111af66161d26658fb8802f74c4b72157c697750f06090c417848c65a13f08a8"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.258407 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5j4t" event={"ID":"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9","Type":"ContainerStarted","Data":"c10ea088c69cbaab85e4f2242c24b0b2bfaf23d2f267ca7cf22edb22ea2911b2"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.264092 4955 generic.go:334] "Generic (PLEG): container finished" podID="d77aebb6-5e14-4958-b762-6e1f2e2c236e" containerID="a970b5b5da360ad5962149ed035d55f7abb74c1ed9ce44a8b75bd85c5341875c" exitCode=0 Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.264149 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d77aebb6-5e14-4958-b762-6e1f2e2c236e","Type":"ContainerDied","Data":"a970b5b5da360ad5962149ed035d55f7abb74c1ed9ce44a8b75bd85c5341875c"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.266610 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7twj" event={"ID":"480861f6-44ea-41c3-806e-497f3177eb91","Type":"ContainerStarted","Data":"9b8eda607faa5677bf3c006c4634af91f19f3d5ad4e82e6d1d0f625c24a84b0a"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.267050 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-p7twj" Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.271362 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"43af4b7b-306d-4c1d-9947-f4749eeed848","Type":"ContainerStarted","Data":"ea40b12d4c7267a5ebf70e40489a897657c94099afbaa3046807746c3a206923"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.273621 4955 generic.go:334] "Generic (PLEG): container finished" podID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" 
containerID="bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862" exitCode=0 Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.273681 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrnqt" event={"ID":"decb5fa1-a3bf-46fb-8662-116a233e9dc7","Type":"ContainerDied","Data":"bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.276076 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a5d9947-5c29-430e-b975-78024809faed","Type":"ContainerStarted","Data":"d933295a784ad31146861b17fbbe3681e8b8004cabd5d26bc1ae1ad26b70e093"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.276889 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.283787 4955 generic.go:334] "Generic (PLEG): container finished" podID="d3dda1e4-d043-4acc-ba59-2c64762956be" containerID="f249fb6d75c5ea2f96027353f3c69a2b53f0c96f82e4cb47bb7dcca02e6d9ad1" exitCode=0 Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.284104 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d3dda1e4-d043-4acc-ba59-2c64762956be","Type":"ContainerDied","Data":"f249fb6d75c5ea2f96027353f3c69a2b53f0c96f82e4cb47bb7dcca02e6d9ad1"} Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.336704 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.579129771 podStartE2EDuration="28.3366847s" podCreationTimestamp="2026-02-02 13:16:56 +0000 UTC" firstStartedPulling="2026-02-02 13:17:17.937769451 +0000 UTC m=+888.850105891" lastFinishedPulling="2026-02-02 13:17:23.69532437 +0000 UTC m=+894.607660820" observedRunningTime="2026-02-02 13:17:24.329978137 +0000 UTC m=+895.242314577" watchObservedRunningTime="2026-02-02 13:17:24.3366847 +0000 UTC m=+895.249021150" Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.356179 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-p7twj" podStartSLOduration=20.962922921 podStartE2EDuration="25.35616251s" podCreationTimestamp="2026-02-02 13:16:59 +0000 UTC" firstStartedPulling="2026-02-02 13:17:19.314336916 +0000 UTC m=+890.226673356" lastFinishedPulling="2026-02-02 13:17:23.707576495 +0000 UTC m=+894.619912945" observedRunningTime="2026-02-02 13:17:24.349646262 +0000 UTC m=+895.261982732" watchObservedRunningTime="2026-02-02 13:17:24.35616251 +0000 UTC m=+895.268498960" Feb 02 13:17:24 crc kubenswrapper[4955]: I0202 13:17:24.768795 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.301518 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d77aebb6-5e14-4958-b762-6e1f2e2c236e","Type":"ContainerStarted","Data":"f568d79fa73522821dad0097c21d9ef486a824fdb0cfd0ad802bc5ad69d235d2"} Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.304024 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrnqt" event={"ID":"decb5fa1-a3bf-46fb-8662-116a233e9dc7","Type":"ContainerStarted","Data":"b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96"} Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.308336 4955 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d3dda1e4-d043-4acc-ba59-2c64762956be","Type":"ContainerStarted","Data":"7f606319bfde5003f9ac3180ec93aeeb506417cebf60dca5e4c9f6b79c4e4b9f"} Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.309874 4955 generic.go:334] "Generic (PLEG): container finished" podID="b42e4a2b-d820-45ba-afdf-ab9e0a6787a9" containerID="c10ea088c69cbaab85e4f2242c24b0b2bfaf23d2f267ca7cf22edb22ea2911b2" exitCode=0 Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.309926 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5j4t" event={"ID":"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9","Type":"ContainerDied","Data":"c10ea088c69cbaab85e4f2242c24b0b2bfaf23d2f267ca7cf22edb22ea2911b2"} Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.311271 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"827225b6-1672-40b1-a9ee-7dd2d5db2d1d","Type":"ContainerStarted","Data":"216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f"} Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.326264 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.289931491 podStartE2EDuration="34.326236137s" podCreationTimestamp="2026-02-02 13:16:51 +0000 UTC" firstStartedPulling="2026-02-02 13:16:53.759471155 +0000 UTC m=+864.671807595" lastFinishedPulling="2026-02-02 13:17:18.795775791 +0000 UTC m=+889.708112241" observedRunningTime="2026-02-02 13:17:25.320705365 +0000 UTC m=+896.233041815" watchObservedRunningTime="2026-02-02 13:17:25.326236137 +0000 UTC m=+896.238572587" Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.369989 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.881240116 podStartE2EDuration="32.369971344s" podCreationTimestamp="2026-02-02 13:16:53 +0000 UTC" firstStartedPulling="2026-02-02 13:17:00.38970896 +0000 UTC m=+871.302045400" lastFinishedPulling="2026-02-02 13:17:18.878440168 +0000 UTC m=+889.790776628" observedRunningTime="2026-02-02 13:17:25.367222118 +0000 UTC m=+896.279558588" watchObservedRunningTime="2026-02-02 13:17:25.369971344 +0000 UTC m=+896.282307794" Feb 02 13:17:25 crc kubenswrapper[4955]: I0202 13:17:25.398660 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zrnqt" podStartSLOduration=5.137698372 podStartE2EDuration="8.398643616s" podCreationTimestamp="2026-02-02 13:17:17 +0000 UTC" firstStartedPulling="2026-02-02 13:17:21.449375769 +0000 UTC m=+892.361712219" lastFinishedPulling="2026-02-02 13:17:24.710321013 +0000 UTC m=+895.622657463" observedRunningTime="2026-02-02 13:17:25.388229255 +0000 UTC m=+896.300565715" watchObservedRunningTime="2026-02-02 13:17:25.398643616 +0000 UTC m=+896.310980066" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.012805 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvpbz"] Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.054690 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-b7qd7"] Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.058148 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.093511 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-b7qd7"] Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.214802 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n2g8w"] Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.215627 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.215706 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-config\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.215812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkdr\" (UniqueName: \"kubernetes.io/projected/fc261114-ebe8-444a-aaaf-517260085546-kube-api-access-vnkdr\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.216277 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.242731 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2g8w"] Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.317282 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-utilities\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.317400 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95mrw\" (UniqueName: \"kubernetes.io/projected/735a40d7-b7b4-461b-b99f-6557672748e7-kube-api-access-95mrw\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.317428 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnkdr\" (UniqueName: \"kubernetes.io/projected/fc261114-ebe8-444a-aaaf-517260085546-kube-api-access-vnkdr\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.317463 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-catalog-content\") pod \"community-operators-n2g8w\" (UID: 
\"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.317502 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.317534 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-config\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.318437 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-config\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.319981 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.361686 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnkdr\" (UniqueName: \"kubernetes.io/projected/fc261114-ebe8-444a-aaaf-517260085546-kube-api-access-vnkdr\") pod \"dnsmasq-dns-7cb5889db5-b7qd7\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.398242 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.418930 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95mrw\" (UniqueName: \"kubernetes.io/projected/735a40d7-b7b4-461b-b99f-6557672748e7-kube-api-access-95mrw\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.418990 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-catalog-content\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.419064 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-utilities\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.419539 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-utilities\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.420001 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-catalog-content\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.457400 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95mrw\" (UniqueName: \"kubernetes.io/projected/735a40d7-b7b4-461b-b99f-6557672748e7-kube-api-access-95mrw\") pod \"community-operators-n2g8w\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.535352 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.563772 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.564527 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:27 crc kubenswrapper[4955]: I0202 13:17:27.625891 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.158640 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.163464 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.165794 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.165844 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.166281 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-zxqrk" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.166317 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.180416 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.232249 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ec1a6503-248d-4f72-a3ab-e23df2ca163d-lock\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.232300 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.232476 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.232539 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zm8\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-kube-api-access-n4zm8\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.232797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1a6503-248d-4f72-a3ab-e23df2ca163d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.232901 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ec1a6503-248d-4f72-a3ab-e23df2ca163d-cache\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.334415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1a6503-248d-4f72-a3ab-e23df2ca163d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.334472 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ec1a6503-248d-4f72-a3ab-e23df2ca163d-cache\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.334509 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ec1a6503-248d-4f72-a3ab-e23df2ca163d-lock\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.334544 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.334647 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.334684 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zm8\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-kube-api-access-n4zm8\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.335036 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ec1a6503-248d-4f72-a3ab-e23df2ca163d-cache\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.335197 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.335244 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.335305 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift podName:ec1a6503-248d-4f72-a3ab-e23df2ca163d nodeName:}" failed. No retries permitted until 2026-02-02 13:17:28.835282859 +0000 UTC m=+899.747619379 (durationBeforeRetry 500ms). 
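
The reconciler_common/operation_generator sequence above follows a desired-state versus actual-state pattern: each volume a pod requires is verified as attached, a mount operation is started, and success moves the volume into the actual state; failures are retried on a later pass. A compressed sketch of that loop follows, with invented names and print statements standing in for the log lines.

package main

import "fmt"

type reconciler struct {
	desired []string        // volumes the pod spec requires
	actual  map[string]bool // volumes already mounted
}

func (r *reconciler) sync(mount func(string) error) {
	for _, v := range r.desired {
		if r.actual[v] {
			continue
		}
		fmt.Printf("MountVolume started for volume %q\n", v) // "operationExecutor.MountVolume started"
		if err := mount(v); err != nil {
			fmt.Printf("MountVolume.SetUp failed for %q: %v\n", v, err)
			continue // retried on a later sync pass, with backoff
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
		r.actual[v] = true
	}
}

func main() {
	r := &reconciler{
		desired: []string{"config", "dns-svc", "kube-api-access-vnkdr"},
		actual:  map[string]bool{},
	}
	r.sync(func(string) error { return nil })
}
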
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift") pod "swift-storage-0" (UID: "ec1a6503-248d-4f72-a3ab-e23df2ca163d") : configmap "swift-ring-files" not found Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.335200 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ec1a6503-248d-4f72-a3ab-e23df2ca163d-lock\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.335350 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.350471 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1a6503-248d-4f72-a3ab-e23df2ca163d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.353453 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zm8\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-kube-api-access-n4zm8\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.378190 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.709282 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dv4r7"] Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.710194 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.711794 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.712045 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.712702 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.722545 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dv4r7"] Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.730216 4955 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.86:56840->38.129.56.86:34121: write tcp 38.129.56.86:56840->38.129.56.86:34121: write: broken pipe Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.832132 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.842883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-combined-ca-bundle\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843003 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-dispersionconf\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843194 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843314 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-etc-swift\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843337 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-ring-data-devices\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843354 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2hp\" (UniqueName: \"kubernetes.io/projected/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-kube-api-access-tn2hp\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843389 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-swiftconf\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.843435 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-scripts\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.844106 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.844128 4955 projected.go:194] Error preparing data for projected volume etc-swift for 
pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:17:28 crc kubenswrapper[4955]: E0202 13:17:28.844168 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift podName:ec1a6503-248d-4f72-a3ab-e23df2ca163d nodeName:}" failed. No retries permitted until 2026-02-02 13:17:29.844153258 +0000 UTC m=+900.756489708 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift") pod "swift-storage-0" (UID: "ec1a6503-248d-4f72-a3ab-e23df2ca163d") : configmap "swift-ring-files" not found Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944316 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-dns-svc\") pod \"dff8da18-59aa-4616-b217-66f739215534\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944440 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-config\") pod \"dff8da18-59aa-4616-b217-66f739215534\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944471 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjg4j\" (UniqueName: \"kubernetes.io/projected/dff8da18-59aa-4616-b217-66f739215534-kube-api-access-xjg4j\") pod \"dff8da18-59aa-4616-b217-66f739215534\" (UID: \"dff8da18-59aa-4616-b217-66f739215534\") " Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944729 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-combined-ca-bundle\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944760 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-dispersionconf\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944837 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-etc-swift\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944859 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-ring-data-devices\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944874 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2hp\" (UniqueName: \"kubernetes.io/projected/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-kube-api-access-tn2hp\") pod 
\"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944896 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-swiftconf\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.944920 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-scripts\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.945390 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-config" (OuterVolumeSpecName: "config") pod "dff8da18-59aa-4616-b217-66f739215534" (UID: "dff8da18-59aa-4616-b217-66f739215534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.945694 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-scripts\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.945859 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-ring-data-devices\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.945967 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dff8da18-59aa-4616-b217-66f739215534" (UID: "dff8da18-59aa-4616-b217-66f739215534"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.945988 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-etc-swift\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.954849 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff8da18-59aa-4616-b217-66f739215534-kube-api-access-xjg4j" (OuterVolumeSpecName: "kube-api-access-xjg4j") pod "dff8da18-59aa-4616-b217-66f739215534" (UID: "dff8da18-59aa-4616-b217-66f739215534"). InnerVolumeSpecName "kube-api-access-xjg4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.955095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-swiftconf\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.959932 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-dispersionconf\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.979611 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-combined-ca-bundle\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:28 crc kubenswrapper[4955]: I0202 13:17:28.985703 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2hp\" (UniqueName: \"kubernetes.io/projected/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-kube-api-access-tn2hp\") pod \"swift-ring-rebalance-dv4r7\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.026020 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.046916 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.046948 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8da18-59aa-4616-b217-66f739215534-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.046958 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjg4j\" (UniqueName: \"kubernetes.io/projected/dff8da18-59aa-4616-b217-66f739215534-kube-api-access-xjg4j\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.372743 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.372828 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvpbz" event={"ID":"dff8da18-59aa-4616-b217-66f739215534","Type":"ContainerDied","Data":"f1be7f1e39db8edfccc6bcb947980cb7d02475416bd812b43bf067bb3a672c22"} Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.440282 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.450704 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvpbz"] Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.473574 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvpbz"] Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.731533 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff8da18-59aa-4616-b217-66f739215534" path="/var/lib/kubelet/pods/dff8da18-59aa-4616-b217-66f739215534/volumes" Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.772036 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrnqt"] Feb 02 13:17:29 crc kubenswrapper[4955]: I0202 13:17:29.863718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:29 crc kubenswrapper[4955]: E0202 13:17:29.865646 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:17:29 crc kubenswrapper[4955]: E0202 13:17:29.865912 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:17:29 crc kubenswrapper[4955]: E0202 13:17:29.865959 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift podName:ec1a6503-248d-4f72-a3ab-e23df2ca163d nodeName:}" failed. No retries permitted until 2026-02-02 13:17:31.865942755 +0000 UTC m=+902.778279205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift") pod "swift-storage-0" (UID: "ec1a6503-248d-4f72-a3ab-e23df2ca163d") : configmap "swift-ring-files" not found Feb 02 13:17:30 crc kubenswrapper[4955]: I0202 13:17:30.230098 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2g8w"] Feb 02 13:17:30 crc kubenswrapper[4955]: W0202 13:17:30.266776 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735a40d7_b7b4_461b_b99f_6557672748e7.slice/crio-9333eb086d853de1a39b209aa6329aac6c90be3dc44ebb10ba3048ab1c34236c WatchSource:0}: Error finding container 9333eb086d853de1a39b209aa6329aac6c90be3dc44ebb10ba3048ab1c34236c: Status 404 returned error can't find the container with id 9333eb086d853de1a39b209aa6329aac6c90be3dc44ebb10ba3048ab1c34236c Feb 02 13:17:30 crc kubenswrapper[4955]: I0202 13:17:30.416316 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2g8w" event={"ID":"735a40d7-b7b4-461b-b99f-6557672748e7","Type":"ContainerStarted","Data":"9333eb086d853de1a39b209aa6329aac6c90be3dc44ebb10ba3048ab1c34236c"} Feb 02 13:17:30 crc kubenswrapper[4955]: I0202 13:17:30.601358 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-b7qd7"] Feb 02 13:17:30 crc kubenswrapper[4955]: W0202 13:17:30.601813 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc261114_ebe8_444a_aaaf_517260085546.slice/crio-904a2922949859ab1be7f2f7e56b6a6b3dfde3d160789649e15dd9faa199ec00 WatchSource:0}: Error finding container 904a2922949859ab1be7f2f7e56b6a6b3dfde3d160789649e15dd9faa199ec00: Status 404 returned error can't find the container with id 904a2922949859ab1be7f2f7e56b6a6b3dfde3d160789649e15dd9faa199ec00 Feb 02 13:17:30 crc kubenswrapper[4955]: I0202 13:17:30.628141 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dv4r7"] Feb 02 13:17:30 crc kubenswrapper[4955]: W0202 13:17:30.638869 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3cb6dc2_d198_405c_816a_dd3ddb578ed4.slice/crio-27d519508c7767adcbfb014136d61326e2caf41875c42e3f2c067bbbf5da3d28 WatchSource:0}: Error finding container 27d519508c7767adcbfb014136d61326e2caf41875c42e3f2c067bbbf5da3d28: Status 404 returned error can't find the container with id 27d519508c7767adcbfb014136d61326e2caf41875c42e3f2c067bbbf5da3d28 Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.430983 4955 generic.go:334] "Generic (PLEG): container finished" podID="fc261114-ebe8-444a-aaaf-517260085546" containerID="4f570f8ff60d70defc629ab252cdb8f5c352698eecd216d3e91fc263a288af52" exitCode=0 Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.431119 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" event={"ID":"fc261114-ebe8-444a-aaaf-517260085546","Type":"ContainerDied","Data":"4f570f8ff60d70defc629ab252cdb8f5c352698eecd216d3e91fc263a288af52"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.431617 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" 
event={"ID":"fc261114-ebe8-444a-aaaf-517260085546","Type":"ContainerStarted","Data":"904a2922949859ab1be7f2f7e56b6a6b3dfde3d160789649e15dd9faa199ec00"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.439093 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"43af4b7b-306d-4c1d-9947-f4749eeed848","Type":"ContainerStarted","Data":"5e2ec716742725921b1e95952a3ef14dc4531d50d8902f887d0b2917021a5e8d"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.442598 4955 generic.go:334] "Generic (PLEG): container finished" podID="eda30c6d-d71c-417b-8434-ff87281d64c7" containerID="5be7c7b95bb3ae2c289ed616800cf637584d45a81265b110304f4a4352352a69" exitCode=0 Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.447455 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkf4x" event={"ID":"eda30c6d-d71c-417b-8434-ff87281d64c7","Type":"ContainerDied","Data":"5be7c7b95bb3ae2c289ed616800cf637584d45a81265b110304f4a4352352a69"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.461395 4955 generic.go:334] "Generic (PLEG): container finished" podID="735a40d7-b7b4-461b-b99f-6557672748e7" containerID="0db8154f17039b7ef107155f12d12717bc6bff63ece06f3ebd3924ad281e99e7" exitCode=0 Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.461479 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2g8w" event={"ID":"735a40d7-b7b4-461b-b99f-6557672748e7","Type":"ContainerDied","Data":"0db8154f17039b7ef107155f12d12717bc6bff63ece06f3ebd3924ad281e99e7"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.472126 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2856fe37-3113-44d2-ac52-f28f9d5aba38","Type":"ContainerStarted","Data":"51cff2f48abd72af09d736b1fbb6e12d8a10050f361020e8fe52637b88318381"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.516751 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5j4t" event={"ID":"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9","Type":"ContainerStarted","Data":"9b1dca1109cc7ca081874817fca1eb08886adddf3f123ab485e9c485dd5b60fc"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.516845 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.516864 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5j4t" event={"ID":"b42e4a2b-d820-45ba-afdf-ab9e0a6787a9","Type":"ContainerStarted","Data":"dbe2aed60ad98f79103193ea1f0deb6057869388580100c465f961d7ed4137e7"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.519327 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zrnqt" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="registry-server" containerID="cri-o://b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96" gracePeriod=2 Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.520013 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dv4r7" event={"ID":"b3cb6dc2-d198-405c-816a-dd3ddb578ed4","Type":"ContainerStarted","Data":"27d519508c7767adcbfb014136d61326e2caf41875c42e3f2c067bbbf5da3d28"} Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.532232 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.331715618 podStartE2EDuration="29.532208478s" podCreationTimestamp="2026-02-02 13:17:02 +0000 UTC" firstStartedPulling="2026-02-02 13:17:19.964807025 +0000 UTC m=+890.877143475" lastFinishedPulling="2026-02-02 13:17:30.165299885 +0000 UTC m=+901.077636335" observedRunningTime="2026-02-02 13:17:31.526151032 +0000 UTC m=+902.438487482" watchObservedRunningTime="2026-02-02 13:17:31.532208478 +0000 UTC m=+902.444544928" Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.564099 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.782180357 podStartE2EDuration="31.564078358s" podCreationTimestamp="2026-02-02 13:17:00 +0000 UTC" firstStartedPulling="2026-02-02 13:17:19.990236029 +0000 UTC m=+890.902572479" lastFinishedPulling="2026-02-02 13:17:29.77213402 +0000 UTC m=+900.684470480" observedRunningTime="2026-02-02 13:17:31.562503879 +0000 UTC m=+902.474840349" watchObservedRunningTime="2026-02-02 13:17:31.564078358 +0000 UTC m=+902.476414808" Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.599285 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-h5j4t" podStartSLOduration=27.678491087 podStartE2EDuration="32.599268857s" podCreationTimestamp="2026-02-02 13:16:59 +0000 UTC" firstStartedPulling="2026-02-02 13:17:18.778002693 +0000 UTC m=+889.690339143" lastFinishedPulling="2026-02-02 13:17:23.698780463 +0000 UTC m=+894.611116913" observedRunningTime="2026-02-02 13:17:31.595165668 +0000 UTC m=+902.507502128" watchObservedRunningTime="2026-02-02 13:17:31.599268857 +0000 UTC m=+902.511605307" Feb 02 13:17:31 crc kubenswrapper[4955]: I0202 13:17:31.923764 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:31 crc kubenswrapper[4955]: E0202 13:17:31.924065 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:17:31 crc kubenswrapper[4955]: E0202 13:17:31.924257 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:17:31 crc kubenswrapper[4955]: E0202 13:17:31.924311 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift podName:ec1a6503-248d-4f72-a3ab-e23df2ca163d nodeName:}" failed. No retries permitted until 2026-02-02 13:17:35.924292667 +0000 UTC m=+906.836629127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift") pod "swift-storage-0" (UID: "ec1a6503-248d-4f72-a3ab-e23df2ca163d") : configmap "swift-ring-files" not found Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.034472 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.128124 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-utilities\") pod \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.128194 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-catalog-content\") pod \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.128300 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwcw5\" (UniqueName: \"kubernetes.io/projected/decb5fa1-a3bf-46fb-8662-116a233e9dc7-kube-api-access-wwcw5\") pod \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\" (UID: \"decb5fa1-a3bf-46fb-8662-116a233e9dc7\") " Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.129581 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-utilities" (OuterVolumeSpecName: "utilities") pod "decb5fa1-a3bf-46fb-8662-116a233e9dc7" (UID: "decb5fa1-a3bf-46fb-8662-116a233e9dc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.136221 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decb5fa1-a3bf-46fb-8662-116a233e9dc7-kube-api-access-wwcw5" (OuterVolumeSpecName: "kube-api-access-wwcw5") pod "decb5fa1-a3bf-46fb-8662-116a233e9dc7" (UID: "decb5fa1-a3bf-46fb-8662-116a233e9dc7"). InnerVolumeSpecName "kube-api-access-wwcw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.185164 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "decb5fa1-a3bf-46fb-8662-116a233e9dc7" (UID: "decb5fa1-a3bf-46fb-8662-116a233e9dc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.200840 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.200888 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.229612 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.229651 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb5fa1-a3bf-46fb-8662-116a233e9dc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.229669 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwcw5\" (UniqueName: \"kubernetes.io/projected/decb5fa1-a3bf-46fb-8662-116a233e9dc7-kube-api-access-wwcw5\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.251910 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.528438 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" event={"ID":"fc261114-ebe8-444a-aaaf-517260085546","Type":"ContainerStarted","Data":"56057c403cea4f101a0af4b046c5af2a04c05bb0b5fbb194855c52bd83e93dc2"} Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.528597 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.534543 4955 generic.go:334] "Generic (PLEG): container finished" podID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerID="b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96" exitCode=0 Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.534590 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrnqt" event={"ID":"decb5fa1-a3bf-46fb-8662-116a233e9dc7","Type":"ContainerDied","Data":"b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96"} Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.534629 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrnqt" event={"ID":"decb5fa1-a3bf-46fb-8662-116a233e9dc7","Type":"ContainerDied","Data":"90b3be083f3ee0d0c4b19e503d786a5c2501b1718d153f8bc28e8ce8539b18cf"} Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.534651 4955 scope.go:117] "RemoveContainer" containerID="b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.535368 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.535767 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrnqt" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.550317 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" podStartSLOduration=5.128896648 podStartE2EDuration="5.550298735s" podCreationTimestamp="2026-02-02 13:17:27 +0000 UTC" firstStartedPulling="2026-02-02 13:17:30.604358789 +0000 UTC m=+901.516695229" lastFinishedPulling="2026-02-02 13:17:31.025760866 +0000 UTC m=+901.938097316" observedRunningTime="2026-02-02 13:17:32.545840698 +0000 UTC m=+903.458177148" watchObservedRunningTime="2026-02-02 13:17:32.550298735 +0000 UTC m=+903.462635185" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.556028 4955 scope.go:117] "RemoveContainer" containerID="bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.597074 4955 scope.go:117] "RemoveContainer" containerID="b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.600184 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.601045 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrnqt"] Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.608609 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrnqt"] Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.632264 4955 scope.go:117] "RemoveContainer" containerID="b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96" Feb 02 13:17:32 crc kubenswrapper[4955]: E0202 13:17:32.634531 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96\": container with ID starting with b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96 not found: ID does not exist" containerID="b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.634623 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96"} err="failed to get container status \"b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96\": rpc error: code = NotFound desc = could not find container \"b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96\": container with ID starting with b3ef18648ad5886c36ad1612cf04608fd08ed71bfb560b8b32baff5365aa5d96 not found: ID does not exist" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.634645 4955 scope.go:117] "RemoveContainer" containerID="bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862" Feb 02 13:17:32 crc kubenswrapper[4955]: E0202 13:17:32.635158 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862\": container with ID starting with bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862 not found: ID does not exist" containerID="bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.635192 4955 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862"} err="failed to get container status \"bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862\": rpc error: code = NotFound desc = could not find container \"bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862\": container with ID starting with bc3a1e7fdc193bc89414ff5112d7e39aeac248a956fa6f1762ed7ba1823b3862 not found: ID does not exist" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.635208 4955 scope.go:117] "RemoveContainer" containerID="b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78" Feb 02 13:17:32 crc kubenswrapper[4955]: E0202 13:17:32.635442 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78\": container with ID starting with b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78 not found: ID does not exist" containerID="b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.635461 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78"} err="failed to get container status \"b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78\": rpc error: code = NotFound desc = could not find container \"b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78\": container with ID starting with b57d9ac386e522494eb40965a3281cf63b7f4c2b68bd5f3acb28b3d519358a78 not found: ID does not exist" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.841277 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m4f8x"] Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.867133 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-z8pdc"] Feb 02 13:17:32 crc kubenswrapper[4955]: E0202 13:17:32.867487 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="extract-utilities" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.867508 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="extract-utilities" Feb 02 13:17:32 crc kubenswrapper[4955]: E0202 13:17:32.867536 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="extract-content" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.867542 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="extract-content" Feb 02 13:17:32 crc kubenswrapper[4955]: E0202 13:17:32.867578 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="registry-server" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.867585 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="registry-server" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.867732 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" containerName="registry-server" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.868479 4955 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.873793 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.885985 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-z8pdc"] Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.913378 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zvbpv"] Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.914347 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.918960 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 13:17:32 crc kubenswrapper[4955]: I0202 13:17:32.931357 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zvbpv"] Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.017091 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.017137 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044200 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-config\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044262 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkcq\" (UniqueName: \"kubernetes.io/projected/6a847a81-83ab-4560-924b-9051e0322672-kube-api-access-mbkcq\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044326 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6a847a81-83ab-4560-924b-9051e0322672-ovn-rundir\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044367 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a847a81-83ab-4560-924b-9051e0322672-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044395 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a847a81-83ab-4560-924b-9051e0322672-config\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044421 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6a847a81-83ab-4560-924b-9051e0322672-ovs-rundir\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044477 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvwt\" (UniqueName: \"kubernetes.io/projected/020a486c-6f94-48f2-a093-1aec0829a80b-kube-api-access-xvvwt\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044521 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044544 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a847a81-83ab-4560-924b-9051e0322672-combined-ca-bundle\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.044592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.121859 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.121907 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147541 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6a847a81-83ab-4560-924b-9051e0322672-ovn-rundir\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147620 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a847a81-83ab-4560-924b-9051e0322672-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147655 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a847a81-83ab-4560-924b-9051e0322672-config\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147680 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6a847a81-83ab-4560-924b-9051e0322672-ovs-rundir\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147727 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvwt\" (UniqueName: \"kubernetes.io/projected/020a486c-6f94-48f2-a093-1aec0829a80b-kube-api-access-xvvwt\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147751 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147768 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a847a81-83ab-4560-924b-9051e0322672-combined-ca-bundle\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147814 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-config\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.147835 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbkcq\" (UniqueName: \"kubernetes.io/projected/6a847a81-83ab-4560-924b-9051e0322672-kube-api-access-mbkcq\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.148449 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6a847a81-83ab-4560-924b-9051e0322672-ovn-rundir\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.149821 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.149819 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6a847a81-83ab-4560-924b-9051e0322672-ovs-rundir\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.150421 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a847a81-83ab-4560-924b-9051e0322672-config\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.150935 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.151721 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-config\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.153111 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a847a81-83ab-4560-924b-9051e0322672-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.158077 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a847a81-83ab-4560-924b-9051e0322672-combined-ca-bundle\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.169671 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbkcq\" (UniqueName: \"kubernetes.io/projected/6a847a81-83ab-4560-924b-9051e0322672-kube-api-access-mbkcq\") pod \"ovn-controller-metrics-zvbpv\" (UID: \"6a847a81-83ab-4560-924b-9051e0322672\") " pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.182576 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvwt\" (UniqueName: \"kubernetes.io/projected/020a486c-6f94-48f2-a093-1aec0829a80b-kube-api-access-xvvwt\") pod \"dnsmasq-dns-74f6f696b9-z8pdc\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.189282 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.234877 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zvbpv" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.241413 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-b7qd7"] Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.266756 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-fw7r5"] Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.281844 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.297863 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.320032 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.330686 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fw7r5"] Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.352862 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-dns-svc\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.352911 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hfz\" (UniqueName: \"kubernetes.io/projected/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-kube-api-access-n4hfz\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.352981 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-config\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.353049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.353085 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.454591 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-dns-svc\") pod 
\"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.454837 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4hfz\" (UniqueName: \"kubernetes.io/projected/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-kube-api-access-n4hfz\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.454893 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-config\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.454931 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.454959 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.455402 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-dns-svc\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.455781 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.456107 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.456319 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-config\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.480081 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hfz\" (UniqueName: \"kubernetes.io/projected/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-kube-api-access-n4hfz\") pod \"dnsmasq-dns-698758b865-fw7r5\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " 
pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.546138 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkf4x" event={"ID":"eda30c6d-d71c-417b-8434-ff87281d64c7","Type":"ContainerStarted","Data":"05925dad241995515d85ab8b29c7ee2d47f0221f23c161bc224361f9d07492de"} Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.579342 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qkf4x" podStartSLOduration=16.093213769 podStartE2EDuration="26.579296946s" podCreationTimestamp="2026-02-02 13:17:07 +0000 UTC" firstStartedPulling="2026-02-02 13:17:21.44940905 +0000 UTC m=+892.361745500" lastFinishedPulling="2026-02-02 13:17:31.935492227 +0000 UTC m=+902.847828677" observedRunningTime="2026-02-02 13:17:33.570304949 +0000 UTC m=+904.482641409" watchObservedRunningTime="2026-02-02 13:17:33.579296946 +0000 UTC m=+904.491633396" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.636332 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.656386 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.737655 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decb5fa1-a3bf-46fb-8662-116a233e9dc7" path="/var/lib/kubelet/pods/decb5fa1-a3bf-46fb-8662-116a233e9dc7/volumes" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.919299 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.919630 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:33 crc kubenswrapper[4955]: I0202 13:17:33.968310 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.354431 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.367680 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bx69p"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.368666 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.380238 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3703-account-create-update-7pxx8"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.381438 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.386684 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.409226 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bx69p"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.416844 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3703-account-create-update-7pxx8"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.479744 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-config\") pod \"5a56fa80-19ba-40de-a107-32577f31ed7a\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.479920 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bwwz\" (UniqueName: \"kubernetes.io/projected/5a56fa80-19ba-40de-a107-32577f31ed7a-kube-api-access-4bwwz\") pod \"5a56fa80-19ba-40de-a107-32577f31ed7a\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.480054 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-dns-svc\") pod \"5a56fa80-19ba-40de-a107-32577f31ed7a\" (UID: \"5a56fa80-19ba-40de-a107-32577f31ed7a\") " Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.480286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-operator-scripts\") pod \"keystone-db-create-bx69p\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.480377 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8whx\" (UniqueName: \"kubernetes.io/projected/416b4256-31d6-4455-a31e-62c0a1f3d5fe-kube-api-access-q8whx\") pod \"keystone-3703-account-create-update-7pxx8\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.480419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fngb\" (UniqueName: \"kubernetes.io/projected/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-kube-api-access-9fngb\") pod \"keystone-db-create-bx69p\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.480439 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416b4256-31d6-4455-a31e-62c0a1f3d5fe-operator-scripts\") pod \"keystone-3703-account-create-update-7pxx8\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.481023 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-config" 
(OuterVolumeSpecName: "config") pod "5a56fa80-19ba-40de-a107-32577f31ed7a" (UID: "5a56fa80-19ba-40de-a107-32577f31ed7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.481197 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a56fa80-19ba-40de-a107-32577f31ed7a" (UID: "5a56fa80-19ba-40de-a107-32577f31ed7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.487415 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a56fa80-19ba-40de-a107-32577f31ed7a-kube-api-access-4bwwz" (OuterVolumeSpecName: "kube-api-access-4bwwz") pod "5a56fa80-19ba-40de-a107-32577f31ed7a" (UID: "5a56fa80-19ba-40de-a107-32577f31ed7a"). InnerVolumeSpecName "kube-api-access-4bwwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.555473 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" event={"ID":"5a56fa80-19ba-40de-a107-32577f31ed7a","Type":"ContainerDied","Data":"6678b7aa7f0d35ab3fda2192b68c649f79b9d9099fc1ffa7e1c842ae89af991d"} Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.555575 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m4f8x" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.556365 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" podUID="fc261114-ebe8-444a-aaaf-517260085546" containerName="dnsmasq-dns" containerID="cri-o://56057c403cea4f101a0af4b046c5af2a04c05bb0b5fbb194855c52bd83e93dc2" gracePeriod=10 Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582512 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fngb\" (UniqueName: \"kubernetes.io/projected/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-kube-api-access-9fngb\") pod \"keystone-db-create-bx69p\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582577 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416b4256-31d6-4455-a31e-62c0a1f3d5fe-operator-scripts\") pod \"keystone-3703-account-create-update-7pxx8\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582656 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-operator-scripts\") pod \"keystone-db-create-bx69p\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582723 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8whx\" (UniqueName: \"kubernetes.io/projected/416b4256-31d6-4455-a31e-62c0a1f3d5fe-kube-api-access-q8whx\") pod \"keystone-3703-account-create-update-7pxx8\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " 
pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582777 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bwwz\" (UniqueName: \"kubernetes.io/projected/5a56fa80-19ba-40de-a107-32577f31ed7a-kube-api-access-4bwwz\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582788 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.582797 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a56fa80-19ba-40de-a107-32577f31ed7a-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.583695 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-operator-scripts\") pod \"keystone-db-create-bx69p\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.583745 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416b4256-31d6-4455-a31e-62c0a1f3d5fe-operator-scripts\") pod \"keystone-3703-account-create-update-7pxx8\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.608209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fngb\" (UniqueName: \"kubernetes.io/projected/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-kube-api-access-9fngb\") pod \"keystone-db-create-bx69p\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.616936 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8whx\" (UniqueName: \"kubernetes.io/projected/416b4256-31d6-4455-a31e-62c0a1f3d5fe-kube-api-access-q8whx\") pod \"keystone-3703-account-create-update-7pxx8\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.671488 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m4f8x"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.684059 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m4f8x"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.689073 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.692766 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.708682 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.759628 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9mhp4"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.790611 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.790644 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.791426 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.813588 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9mhp4"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.837048 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4e0e-account-create-update-qks52"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.838311 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.840486 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.885451 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4e0e-account-create-update-qks52"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.897878 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b79866-94e2-413c-b444-4af683c5095e-operator-scripts\") pod \"placement-db-create-9mhp4\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.898062 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e7279b9-2901-426c-a100-7390d81ae95b-operator-scripts\") pod \"placement-4e0e-account-create-update-qks52\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.898095 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncrh\" (UniqueName: \"kubernetes.io/projected/b3b79866-94e2-413c-b444-4af683c5095e-kube-api-access-hncrh\") pod \"placement-db-create-9mhp4\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.898154 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl4z6\" (UniqueName: \"kubernetes.io/projected/6e7279b9-2901-426c-a100-7390d81ae95b-kube-api-access-fl4z6\") pod \"placement-4e0e-account-create-update-qks52\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.959802 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 
13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.973109 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vrgtc"] Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.974970 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:34 crc kubenswrapper[4955]: I0202 13:17:34.981109 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vrgtc"] Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.006639 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e7279b9-2901-426c-a100-7390d81ae95b-operator-scripts\") pod \"placement-4e0e-account-create-update-qks52\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.006774 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hncrh\" (UniqueName: \"kubernetes.io/projected/b3b79866-94e2-413c-b444-4af683c5095e-kube-api-access-hncrh\") pod \"placement-db-create-9mhp4\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.006866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl4z6\" (UniqueName: \"kubernetes.io/projected/6e7279b9-2901-426c-a100-7390d81ae95b-kube-api-access-fl4z6\") pod \"placement-4e0e-account-create-update-qks52\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.006961 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b79866-94e2-413c-b444-4af683c5095e-operator-scripts\") pod \"placement-db-create-9mhp4\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.007513 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e7279b9-2901-426c-a100-7390d81ae95b-operator-scripts\") pod \"placement-4e0e-account-create-update-qks52\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.008047 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b79866-94e2-413c-b444-4af683c5095e-operator-scripts\") pod \"placement-db-create-9mhp4\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.028789 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl4z6\" (UniqueName: \"kubernetes.io/projected/6e7279b9-2901-426c-a100-7390d81ae95b-kube-api-access-fl4z6\") pod \"placement-4e0e-account-create-update-qks52\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.034501 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncrh\" (UniqueName: 
\"kubernetes.io/projected/b3b79866-94e2-413c-b444-4af683c5095e-kube-api-access-hncrh\") pod \"placement-db-create-9mhp4\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.074623 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ef11-account-create-update-dvm84"] Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.077228 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.081794 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.101492 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ef11-account-create-update-dvm84"] Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.109643 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-operator-scripts\") pod \"glance-db-create-vrgtc\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.109796 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdn69\" (UniqueName: \"kubernetes.io/projected/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-kube-api-access-cdn69\") pod \"glance-db-create-vrgtc\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.121760 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.124447 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.126564 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.127027 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.127304 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kw5pl" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.127771 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.130127 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.160203 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.167705 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212253 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbbc048-2495-466f-9649-8e95698e29d8-config\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212348 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vgg\" (UniqueName: \"kubernetes.io/projected/6cbbc048-2495-466f-9649-8e95698e29d8-kube-api-access-n7vgg\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212369 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5xc\" (UniqueName: \"kubernetes.io/projected/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-kube-api-access-km5xc\") pod \"glance-ef11-account-create-update-dvm84\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212418 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdn69\" (UniqueName: \"kubernetes.io/projected/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-kube-api-access-cdn69\") pod \"glance-db-create-vrgtc\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212491 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-operator-scripts\") pod \"glance-ef11-account-create-update-dvm84\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212541 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cbbc048-2495-466f-9649-8e95698e29d8-scripts\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212606 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6cbbc048-2495-466f-9649-8e95698e29d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212626 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-operator-scripts\") pod \"glance-db-create-vrgtc\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") 
" pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212641 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.212672 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.213430 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-operator-scripts\") pod \"glance-db-create-vrgtc\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.229646 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdn69\" (UniqueName: \"kubernetes.io/projected/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-kube-api-access-cdn69\") pod \"glance-db-create-vrgtc\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.290934 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-operator-scripts\") pod \"glance-ef11-account-create-update-dvm84\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313711 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313754 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cbbc048-2495-466f-9649-8e95698e29d8-scripts\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313800 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6cbbc048-2495-466f-9649-8e95698e29d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313829 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313869 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313907 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbbc048-2495-466f-9649-8e95698e29d8-config\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313937 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vgg\" (UniqueName: \"kubernetes.io/projected/6cbbc048-2495-466f-9649-8e95698e29d8-kube-api-access-n7vgg\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.313967 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km5xc\" (UniqueName: \"kubernetes.io/projected/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-kube-api-access-km5xc\") pod \"glance-ef11-account-create-update-dvm84\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.314523 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-operator-scripts\") pod \"glance-ef11-account-create-update-dvm84\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.315452 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6cbbc048-2495-466f-9649-8e95698e29d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.315932 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbbc048-2495-466f-9649-8e95698e29d8-config\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.316320 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cbbc048-2495-466f-9649-8e95698e29d8-scripts\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.319039 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.324543 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.325284 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cbbc048-2495-466f-9649-8e95698e29d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.329497 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km5xc\" (UniqueName: \"kubernetes.io/projected/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-kube-api-access-km5xc\") pod \"glance-ef11-account-create-update-dvm84\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.330472 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vgg\" (UniqueName: \"kubernetes.io/projected/6cbbc048-2495-466f-9649-8e95698e29d8-kube-api-access-n7vgg\") pod \"ovn-northd-0\" (UID: \"6cbbc048-2495-466f-9649-8e95698e29d8\") " pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.419193 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.443450 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.563331 4955 generic.go:334] "Generic (PLEG): container finished" podID="fc261114-ebe8-444a-aaaf-517260085546" containerID="56057c403cea4f101a0af4b046c5af2a04c05bb0b5fbb194855c52bd83e93dc2" exitCode=0 Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.564133 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" event={"ID":"fc261114-ebe8-444a-aaaf-517260085546","Type":"ContainerDied","Data":"56057c403cea4f101a0af4b046c5af2a04c05bb0b5fbb194855c52bd83e93dc2"} Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.636414 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.727913 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a56fa80-19ba-40de-a107-32577f31ed7a" path="/var/lib/kubelet/pods/5a56fa80-19ba-40de-a107-32577f31ed7a/volumes" Feb 02 13:17:35 crc kubenswrapper[4955]: I0202 13:17:35.929489 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:35 crc kubenswrapper[4955]: E0202 13:17:35.929998 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:17:35 crc kubenswrapper[4955]: E0202 13:17:35.930026 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:17:35 crc kubenswrapper[4955]: E0202 13:17:35.930076 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift podName:ec1a6503-248d-4f72-a3ab-e23df2ca163d nodeName:}" failed. No retries permitted until 2026-02-02 13:17:43.930060149 +0000 UTC m=+914.842396599 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift") pod "swift-storage-0" (UID: "ec1a6503-248d-4f72-a3ab-e23df2ca163d") : configmap "swift-ring-files" not found Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.479375 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.555871 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-config\") pod \"fc261114-ebe8-444a-aaaf-517260085546\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.556266 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-dns-svc\") pod \"fc261114-ebe8-444a-aaaf-517260085546\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.556343 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnkdr\" (UniqueName: \"kubernetes.io/projected/fc261114-ebe8-444a-aaaf-517260085546-kube-api-access-vnkdr\") pod \"fc261114-ebe8-444a-aaaf-517260085546\" (UID: \"fc261114-ebe8-444a-aaaf-517260085546\") " Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.560507 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc261114-ebe8-444a-aaaf-517260085546-kube-api-access-vnkdr" (OuterVolumeSpecName: "kube-api-access-vnkdr") pod "fc261114-ebe8-444a-aaaf-517260085546" (UID: "fc261114-ebe8-444a-aaaf-517260085546"). InnerVolumeSpecName "kube-api-access-vnkdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.607098 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" event={"ID":"fc261114-ebe8-444a-aaaf-517260085546","Type":"ContainerDied","Data":"904a2922949859ab1be7f2f7e56b6a6b3dfde3d160789649e15dd9faa199ec00"} Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.607153 4955 scope.go:117] "RemoveContainer" containerID="56057c403cea4f101a0af4b046c5af2a04c05bb0b5fbb194855c52bd83e93dc2" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.608685 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-b7qd7" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.619663 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc261114-ebe8-444a-aaaf-517260085546" (UID: "fc261114-ebe8-444a-aaaf-517260085546"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.654161 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-z8pdc"] Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.662845 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.662876 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnkdr\" (UniqueName: \"kubernetes.io/projected/fc261114-ebe8-444a-aaaf-517260085546-kube-api-access-vnkdr\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.719972 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-config" (OuterVolumeSpecName: "config") pod "fc261114-ebe8-444a-aaaf-517260085546" (UID: "fc261114-ebe8-444a-aaaf-517260085546"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.765092 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc261114-ebe8-444a-aaaf-517260085546-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.804327 4955 scope.go:117] "RemoveContainer" containerID="4f570f8ff60d70defc629ab252cdb8f5c352698eecd216d3e91fc263a288af52" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.948817 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 13:17:36 crc kubenswrapper[4955]: I0202 13:17:36.974982 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fw7r5"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.084112 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-b7qd7"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.090175 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-b7qd7"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.278869 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4e0e-account-create-update-qks52"] Feb 02 13:17:37 crc kubenswrapper[4955]: W0202 13:17:37.298847 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabf14f31_bd26_4fb3_8cdb_b3a7cdfd0c52.slice/crio-e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504 WatchSource:0}: Error finding container e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504: Status 404 returned error can't find the container with id e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504 Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.320495 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zvbpv"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.330930 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bx69p"] Feb 02 13:17:37 crc kubenswrapper[4955]: W0202 13:17:37.329752 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416b4256_31d6_4455_a31e_62c0a1f3d5fe.slice/crio-78c52c9ee883f122cda755ce41ff5d448b9374f8f3a12c9f4e7ee0b874fb112e WatchSource:0}: Error finding container 78c52c9ee883f122cda755ce41ff5d448b9374f8f3a12c9f4e7ee0b874fb112e: Status 404 returned error can't find the container with id 78c52c9ee883f122cda755ce41ff5d448b9374f8f3a12c9f4e7ee0b874fb112e Feb 02 13:17:37 crc kubenswrapper[4955]: W0202 13:17:37.336355 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f73bc6_3700_4cbe_9e5a_7a95c596c039.slice/crio-36d2a9c07fd3b7f8e57321f3b9821fe1ea562cebaa9bd8709d1c1a6859df28b2 WatchSource:0}: Error finding container 36d2a9c07fd3b7f8e57321f3b9821fe1ea562cebaa9bd8709d1c1a6859df28b2: Status 404 returned error can't find the container with id 36d2a9c07fd3b7f8e57321f3b9821fe1ea562cebaa9bd8709d1c1a6859df28b2 Feb 02 13:17:37 crc kubenswrapper[4955]: W0202 13:17:37.346854 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4222fa7_e71f_4d91_9e0d_ef369046f6a0.slice/crio-3c1f3af6f3695cbadf8e9e66d682ab438e70e6492e5694a1bdd482c3ab528695 WatchSource:0}: Error finding container 3c1f3af6f3695cbadf8e9e66d682ab438e70e6492e5694a1bdd482c3ab528695: Status 404 returned error can't find the container with id 3c1f3af6f3695cbadf8e9e66d682ab438e70e6492e5694a1bdd482c3ab528695 Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.348646 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3703-account-create-update-7pxx8"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.361406 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vrgtc"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.367252 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ef11-account-create-update-dvm84"] Feb 02 13:17:37 crc kubenswrapper[4955]: W0202 13:17:37.373058 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cbbc048_2495_466f_9649_8e95698e29d8.slice/crio-9e844c7b607136f6b42c3f62f428bb6ececf22606d7feb748d6068a63a5a4bad WatchSource:0}: Error finding container 9e844c7b607136f6b42c3f62f428bb6ececf22606d7feb748d6068a63a5a4bad: Status 404 returned error can't find the container with id 9e844c7b607136f6b42c3f62f428bb6ececf22606d7feb748d6068a63a5a4bad Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.375654 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9mhp4"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.382230 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.569712 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.570045 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.616635 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.618078 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
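The W-level manager.go:1169 records in this stretch appear to be cAdvisor watch events racing container creation: the crio-<id> cgroup shows up, but the runtime cannot yet (or can no longer) resolve that container ID, hence the 404. Each ID here reappears in a successful ContainerStarted event below, so the warnings are transient. A hypothetical triage helper that extracts the affected container IDs from such lines so they can be cross-checked against later events:

```go
// Hypothetical helper: pull the 64-hex-char container IDs out of the
// "Status 404 ... can't find the container with id" warnings above.
package main

import (
	"fmt"
	"regexp"
)

var watch404 = regexp.MustCompile(`can't find the container with id ([0-9a-f]{64})`)

func main() {
	line := `Error finding container e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504: Status 404 returned error can't find the container with id e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504`
	for _, m := range watch404.FindAllStringSubmatch(line, -1) {
		fmt.Println(m[1]) // cross-check: this ID shows up in a ContainerStarted event below
	}
}
```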
pod="openstack/placement-db-create-9mhp4" event={"ID":"b3b79866-94e2-413c-b444-4af683c5095e","Type":"ContainerStarted","Data":"d85a9e4708e21ae3a94de4ae998866493c5b5e1af3f8896b6c7457d97c6c9eb3"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.620688 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3703-account-create-update-7pxx8" event={"ID":"416b4256-31d6-4455-a31e-62c0a1f3d5fe","Type":"ContainerStarted","Data":"78c52c9ee883f122cda755ce41ff5d448b9374f8f3a12c9f4e7ee0b874fb112e"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.623336 4955 generic.go:334] "Generic (PLEG): container finished" podID="735a40d7-b7b4-461b-b99f-6557672748e7" containerID="ad346f7a213e411c8bddced5354baf867b3d1a3b21454c065931cb64b75d6d33" exitCode=0 Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.623402 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2g8w" event={"ID":"735a40d7-b7b4-461b-b99f-6557672748e7","Type":"ContainerDied","Data":"ad346f7a213e411c8bddced5354baf867b3d1a3b21454c065931cb64b75d6d33"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.629264 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrgtc" event={"ID":"d4222fa7-e71f-4d91-9e0d-ef369046f6a0","Type":"ContainerStarted","Data":"3c1f3af6f3695cbadf8e9e66d682ab438e70e6492e5694a1bdd482c3ab528695"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.631207 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef11-account-create-update-dvm84" event={"ID":"c4f73bc6-3700-4cbe-9e5a-7a95c596c039","Type":"ContainerStarted","Data":"36d2a9c07fd3b7f8e57321f3b9821fe1ea562cebaa9bd8709d1c1a6859df28b2"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.641001 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6cbbc048-2495-466f-9649-8e95698e29d8","Type":"ContainerStarted","Data":"9e844c7b607136f6b42c3f62f428bb6ececf22606d7feb748d6068a63a5a4bad"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.649863 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bx69p" event={"ID":"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52","Type":"ContainerStarted","Data":"e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.651720 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fw7r5" event={"ID":"2486e70c-d87b-4e5c-bb3b-19d55ebbf622","Type":"ContainerStarted","Data":"d2d35c3a2a667ef36b8a7f2851868cac3907a783b0d884149598626c37a8730e"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.655263 4955 generic.go:334] "Generic (PLEG): container finished" podID="020a486c-6f94-48f2-a093-1aec0829a80b" containerID="d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a" exitCode=0 Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.655341 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" event={"ID":"020a486c-6f94-48f2-a093-1aec0829a80b","Type":"ContainerDied","Data":"d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a"} Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.655367 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" event={"ID":"020a486c-6f94-48f2-a093-1aec0829a80b","Type":"ContainerStarted","Data":"298f8e5504e953fddd6c240c688f733dfc62fa0a0697366045ccd2c5bb123150"} Feb 02 
Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.657252 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zvbpv" event={"ID":"6a847a81-83ab-4560-924b-9051e0322672","Type":"ContainerStarted","Data":"714a8b48d2c585976e8e40af01e9ec23a850d8e921f86b743ad479e51def906a"}
Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.660927 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dv4r7" event={"ID":"b3cb6dc2-d198-405c-816a-dd3ddb578ed4","Type":"ContainerStarted","Data":"3286c72e03c4d05065b4c4a534f756faec721dce38d570b66188531ade9f2ab2"}
Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.662879 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4e0e-account-create-update-qks52" event={"ID":"6e7279b9-2901-426c-a100-7390d81ae95b","Type":"ContainerStarted","Data":"b12f8a8a03bc46c725b96ac23450aa1e40fbdf21a6126ce000cef1b6888abeac"}
Feb 02 13:17:37 crc kubenswrapper[4955]: I0202 13:17:37.696188 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dv4r7" podStartSLOduration=3.993448817 podStartE2EDuration="9.696168362s" podCreationTimestamp="2026-02-02 13:17:28 +0000 UTC" firstStartedPulling="2026-02-02 13:17:30.641091026 +0000 UTC m=+901.553427476" lastFinishedPulling="2026-02-02 13:17:36.343810571 +0000 UTC m=+907.256147021" observedRunningTime="2026-02-02 13:17:37.693765575 +0000 UTC m=+908.606102045" watchObservedRunningTime="2026-02-02 13:17:37.696168362 +0000 UTC m=+908.608504812"
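The pod_startup_latency_tracker.go:104 record above encodes a relationship that holds throughout this log: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Pods that pulled nothing report zero-valued pull timestamps and identical SLO and E2E durations, as in the 13:17:42 records below. Checking the arithmetic with the swift-ring-rebalance-dv4r7 values copied from the log:

```go
// Verify the startup-duration arithmetic for swift-ring-rebalance-dv4r7:
//   E2E = watchObservedRunningTime - podCreationTimestamp
//   SLO = E2E - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 13:17:28 +0000 UTC")
	observed := mustParse("2026-02-02 13:17:37.696168362 +0000 UTC")
	pullStart := mustParse("2026-02-02 13:17:30.641091026 +0000 UTC")
	pullEnd := mustParse("2026-02-02 13:17:36.343810571 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 9.696168362s  (= podStartE2EDuration in the record)
	fmt.Println(slo) // 3.993448817s  (= podStartSLOduration in the record)
}
```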
Need to start a new one" pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.703335 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.731124 4955 generic.go:334] "Generic (PLEG): container finished" podID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerID="156cc03f6970af3fc82db1fcf3453e7d77217661c2b831ef64225214ce8e14ea" exitCode=0 Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.737649 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4e0e-account-create-update-qks52" event={"ID":"6e7279b9-2901-426c-a100-7390d81ae95b","Type":"ContainerStarted","Data":"2c433a9e6e10c3f8a655d1ddefa7490235c12dfecec5721d15bcabcd10f4af87"} Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.737697 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v8fd4"] Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.750793 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fw7r5" event={"ID":"2486e70c-d87b-4e5c-bb3b-19d55ebbf622","Type":"ContainerDied","Data":"156cc03f6970af3fc82db1fcf3453e7d77217661c2b831ef64225214ce8e14ea"} Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.775855 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71a4cd94-2c54-4ce9-a851-a7a107b19451-operator-scripts\") pod \"root-account-create-update-v8fd4\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.775996 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kh2\" (UniqueName: \"kubernetes.io/projected/71a4cd94-2c54-4ce9-a851-a7a107b19451-kube-api-access-b6kh2\") pod \"root-account-create-update-v8fd4\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.877053 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71a4cd94-2c54-4ce9-a851-a7a107b19451-operator-scripts\") pod \"root-account-create-update-v8fd4\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.877143 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6kh2\" (UniqueName: \"kubernetes.io/projected/71a4cd94-2c54-4ce9-a851-a7a107b19451-kube-api-access-b6kh2\") pod \"root-account-create-update-v8fd4\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.878238 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71a4cd94-2c54-4ce9-a851-a7a107b19451-operator-scripts\") pod \"root-account-create-update-v8fd4\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:41 crc kubenswrapper[4955]: I0202 13:17:41.895383 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6kh2\" (UniqueName: 
\"kubernetes.io/projected/71a4cd94-2c54-4ce9-a851-a7a107b19451-kube-api-access-b6kh2\") pod \"root-account-create-update-v8fd4\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.081696 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.584909 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v8fd4"] Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.739015 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bx69p" event={"ID":"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52","Type":"ContainerStarted","Data":"8a5fa4fdc43a62d9943ecb4f2d9a3e10b41f8492daa67c8ae6c7952a0ef3d9e7"} Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.742279 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" event={"ID":"020a486c-6f94-48f2-a093-1aec0829a80b","Type":"ContainerStarted","Data":"63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce"} Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.742713 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.749891 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9mhp4" event={"ID":"b3b79866-94e2-413c-b444-4af683c5095e","Type":"ContainerStarted","Data":"bb0ef8ef062df0f5a85db38636519498c5989f2758195b2e688e00232107c930"} Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.756074 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3703-account-create-update-7pxx8" event={"ID":"416b4256-31d6-4455-a31e-62c0a1f3d5fe","Type":"ContainerStarted","Data":"665023b766e465377a8aa0f03e53b775038a8a3a7c04bbd47a953b3aa0ce77e6"} Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.756920 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-bx69p" podStartSLOduration=8.756906433 podStartE2EDuration="8.756906433s" podCreationTimestamp="2026-02-02 13:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.755164431 +0000 UTC m=+913.667500871" watchObservedRunningTime="2026-02-02 13:17:42.756906433 +0000 UTC m=+913.669242883" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.758086 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrgtc" event={"ID":"d4222fa7-e71f-4d91-9e0d-ef369046f6a0","Type":"ContainerStarted","Data":"eb90ddc4fc2e6e90c985a5e44d4818943bce8a1c90af823c2bac44597a51efca"} Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.759733 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zvbpv" event={"ID":"6a847a81-83ab-4560-924b-9051e0322672","Type":"ContainerStarted","Data":"2ade99d92b6c12afa90162eb1092febd221a2f7d7280270d9b1617ae1fa336d8"} Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.761680 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef11-account-create-update-dvm84" event={"ID":"c4f73bc6-3700-4cbe-9e5a-7a95c596c039","Type":"ContainerStarted","Data":"12fccdd4d92d04e566dbf725ccb7473ff51377fc23b10abc40c5d5ac6a18e536"} Feb 02 
13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.772873 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-9mhp4" podStartSLOduration=8.772853678 podStartE2EDuration="8.772853678s" podCreationTimestamp="2026-02-02 13:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.768463882 +0000 UTC m=+913.680800332" watchObservedRunningTime="2026-02-02 13:17:42.772853678 +0000 UTC m=+913.685190128" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.794966 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" podStartSLOduration=10.794949842 podStartE2EDuration="10.794949842s" podCreationTimestamp="2026-02-02 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.785988065 +0000 UTC m=+913.698324515" watchObservedRunningTime="2026-02-02 13:17:42.794949842 +0000 UTC m=+913.707286292" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.809293 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vrgtc" podStartSLOduration=8.809273868 podStartE2EDuration="8.809273868s" podCreationTimestamp="2026-02-02 13:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.802047383 +0000 UTC m=+913.714383833" watchObservedRunningTime="2026-02-02 13:17:42.809273868 +0000 UTC m=+913.721610318" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.823243 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3703-account-create-update-7pxx8" podStartSLOduration=8.823225175 podStartE2EDuration="8.823225175s" podCreationTimestamp="2026-02-02 13:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.813245694 +0000 UTC m=+913.725582164" watchObservedRunningTime="2026-02-02 13:17:42.823225175 +0000 UTC m=+913.735561625" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.835645 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4e0e-account-create-update-qks52" podStartSLOduration=8.835630575 podStartE2EDuration="8.835630575s" podCreationTimestamp="2026-02-02 13:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.828659796 +0000 UTC m=+913.740996246" watchObservedRunningTime="2026-02-02 13:17:42.835630575 +0000 UTC m=+913.747967025" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.844420 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ef11-account-create-update-dvm84" podStartSLOduration=7.844406316 podStartE2EDuration="7.844406316s" podCreationTimestamp="2026-02-02 13:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.840392859 +0000 UTC m=+913.752729309" watchObservedRunningTime="2026-02-02 13:17:42.844406316 +0000 UTC m=+913.756742766" Feb 02 13:17:42 crc kubenswrapper[4955]: I0202 13:17:42.874110 4955 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ovn-controller-metrics-zvbpv" podStartSLOduration=10.874085763 podStartE2EDuration="10.874085763s" podCreationTimestamp="2026-02-02 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:42.871104201 +0000 UTC m=+913.783440671" watchObservedRunningTime="2026-02-02 13:17:42.874085763 +0000 UTC m=+913.786422213" Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.768768 4955 generic.go:334] "Generic (PLEG): container finished" podID="abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" containerID="8a5fa4fdc43a62d9943ecb4f2d9a3e10b41f8492daa67c8ae6c7952a0ef3d9e7" exitCode=0 Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.768838 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bx69p" event={"ID":"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52","Type":"ContainerDied","Data":"8a5fa4fdc43a62d9943ecb4f2d9a3e10b41f8492daa67c8ae6c7952a0ef3d9e7"} Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.774622 4955 generic.go:334] "Generic (PLEG): container finished" podID="b3b79866-94e2-413c-b444-4af683c5095e" containerID="bb0ef8ef062df0f5a85db38636519498c5989f2758195b2e688e00232107c930" exitCode=0 Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.775016 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9mhp4" event={"ID":"b3b79866-94e2-413c-b444-4af683c5095e","Type":"ContainerDied","Data":"bb0ef8ef062df0f5a85db38636519498c5989f2758195b2e688e00232107c930"} Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.777713 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v8fd4" event={"ID":"71a4cd94-2c54-4ce9-a851-a7a107b19451","Type":"ContainerStarted","Data":"c99821835bd1f1bf8e7a38ae00a724491a5ebcca2938bf618208eae6134fcd1c"} Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.791435 4955 generic.go:334] "Generic (PLEG): container finished" podID="416b4256-31d6-4455-a31e-62c0a1f3d5fe" containerID="665023b766e465377a8aa0f03e53b775038a8a3a7c04bbd47a953b3aa0ce77e6" exitCode=0 Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.791477 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3703-account-create-update-7pxx8" event={"ID":"416b4256-31d6-4455-a31e-62c0a1f3d5fe","Type":"ContainerDied","Data":"665023b766e465377a8aa0f03e53b775038a8a3a7c04bbd47a953b3aa0ce77e6"} Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.795910 4955 generic.go:334] "Generic (PLEG): container finished" podID="d4222fa7-e71f-4d91-9e0d-ef369046f6a0" containerID="eb90ddc4fc2e6e90c985a5e44d4818943bce8a1c90af823c2bac44597a51efca" exitCode=0 Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.796077 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrgtc" event={"ID":"d4222fa7-e71f-4d91-9e0d-ef369046f6a0","Type":"ContainerDied","Data":"eb90ddc4fc2e6e90c985a5e44d4818943bce8a1c90af823c2bac44597a51efca"} Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.800660 4955 generic.go:334] "Generic (PLEG): container finished" podID="c4f73bc6-3700-4cbe-9e5a-7a95c596c039" containerID="12fccdd4d92d04e566dbf725ccb7473ff51377fc23b10abc40c5d5ac6a18e536" exitCode=0 Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.800707 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef11-account-create-update-dvm84" 
event={"ID":"c4f73bc6-3700-4cbe-9e5a-7a95c596c039","Type":"ContainerDied","Data":"12fccdd4d92d04e566dbf725ccb7473ff51377fc23b10abc40c5d5ac6a18e536"} Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.805436 4955 generic.go:334] "Generic (PLEG): container finished" podID="6e7279b9-2901-426c-a100-7390d81ae95b" containerID="2c433a9e6e10c3f8a655d1ddefa7490235c12dfecec5721d15bcabcd10f4af87" exitCode=0 Feb 02 13:17:43 crc kubenswrapper[4955]: I0202 13:17:43.806390 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4e0e-account-create-update-qks52" event={"ID":"6e7279b9-2901-426c-a100-7390d81ae95b","Type":"ContainerDied","Data":"2c433a9e6e10c3f8a655d1ddefa7490235c12dfecec5721d15bcabcd10f4af87"} Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.017950 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:17:44 crc kubenswrapper[4955]: E0202 13:17:44.018150 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:17:44 crc kubenswrapper[4955]: E0202 13:17:44.018180 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:17:44 crc kubenswrapper[4955]: E0202 13:17:44.018250 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift podName:ec1a6503-248d-4f72-a3ab-e23df2ca163d nodeName:}" failed. No retries permitted until 2026-02-02 13:18:00.018227746 +0000 UTC m=+930.930564196 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift") pod "swift-storage-0" (UID: "ec1a6503-248d-4f72-a3ab-e23df2ca163d") : configmap "swift-ring-files" not found Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.819753 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6cbbc048-2495-466f-9649-8e95698e29d8","Type":"ContainerStarted","Data":"e4c1b85ca3d107b4a04e8c94124cde45efdf626da705dad71d02aaea7d187e56"} Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.820188 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.820204 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6cbbc048-2495-466f-9649-8e95698e29d8","Type":"ContainerStarted","Data":"3f832a9767276af6a66a9f936d5b7f0cd99237478f5dfc820514054062ab2224"} Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.830261 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fw7r5" event={"ID":"2486e70c-d87b-4e5c-bb3b-19d55ebbf622","Type":"ContainerStarted","Data":"fc6ef9128cdac0aee35acb9360fc01f76aec0f9db5d779397b17097938364ea2"} Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.831269 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.849357 4955 generic.go:334] "Generic (PLEG): container finished" podID="71a4cd94-2c54-4ce9-a851-a7a107b19451" containerID="273750ce0f2b9342b0e12266a9717372ac4156f05b29bbf3c4dcc1d00e8f0a64" exitCode=0 Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.849445 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v8fd4" event={"ID":"71a4cd94-2c54-4ce9-a851-a7a107b19451","Type":"ContainerDied","Data":"273750ce0f2b9342b0e12266a9717372ac4156f05b29bbf3c4dcc1d00e8f0a64"} Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.854212 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2g8w" event={"ID":"735a40d7-b7b4-461b-b99f-6557672748e7","Type":"ContainerStarted","Data":"04d4792136abce9307afa6296c2f76e3202ed0ccb468d6362454cdd2d7daeb24"} Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.894232 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.056928336 podStartE2EDuration="9.894208791s" podCreationTimestamp="2026-02-02 13:17:35 +0000 UTC" firstStartedPulling="2026-02-02 13:17:37.391051123 +0000 UTC m=+908.303387563" lastFinishedPulling="2026-02-02 13:17:43.228331568 +0000 UTC m=+914.140668018" observedRunningTime="2026-02-02 13:17:44.881144065 +0000 UTC m=+915.793480515" watchObservedRunningTime="2026-02-02 13:17:44.894208791 +0000 UTC m=+915.806545241" Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.928534 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n2g8w" podStartSLOduration=6.895940174 podStartE2EDuration="17.92851389s" podCreationTimestamp="2026-02-02 13:17:27 +0000 UTC" firstStartedPulling="2026-02-02 13:17:31.463773735 +0000 UTC m=+902.376110195" lastFinishedPulling="2026-02-02 13:17:42.496347461 +0000 UTC m=+913.408683911" observedRunningTime="2026-02-02 13:17:44.912057552 +0000 UTC m=+915.824394012" 
watchObservedRunningTime="2026-02-02 13:17:44.92851389 +0000 UTC m=+915.840850340" Feb 02 13:17:44 crc kubenswrapper[4955]: I0202 13:17:44.960151 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-fw7r5" podStartSLOduration=11.960132403 podStartE2EDuration="11.960132403s" podCreationTimestamp="2026-02-02 13:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:44.958274188 +0000 UTC m=+915.870610648" watchObservedRunningTime="2026-02-02 13:17:44.960132403 +0000 UTC m=+915.872468843" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.416058 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.551093 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl4z6\" (UniqueName: \"kubernetes.io/projected/6e7279b9-2901-426c-a100-7390d81ae95b-kube-api-access-fl4z6\") pod \"6e7279b9-2901-426c-a100-7390d81ae95b\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.551170 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e7279b9-2901-426c-a100-7390d81ae95b-operator-scripts\") pod \"6e7279b9-2901-426c-a100-7390d81ae95b\" (UID: \"6e7279b9-2901-426c-a100-7390d81ae95b\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.552491 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7279b9-2901-426c-a100-7390d81ae95b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e7279b9-2901-426c-a100-7390d81ae95b" (UID: "6e7279b9-2901-426c-a100-7390d81ae95b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.567426 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7279b9-2901-426c-a100-7390d81ae95b-kube-api-access-fl4z6" (OuterVolumeSpecName: "kube-api-access-fl4z6") pod "6e7279b9-2901-426c-a100-7390d81ae95b" (UID: "6e7279b9-2901-426c-a100-7390d81ae95b"). InnerVolumeSpecName "kube-api-access-fl4z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.653081 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl4z6\" (UniqueName: \"kubernetes.io/projected/6e7279b9-2901-426c-a100-7390d81ae95b-kube-api-access-fl4z6\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.653115 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e7279b9-2901-426c-a100-7390d81ae95b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.653663 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.661160 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.682851 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.696206 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754677 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km5xc\" (UniqueName: \"kubernetes.io/projected/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-kube-api-access-km5xc\") pod \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754750 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416b4256-31d6-4455-a31e-62c0a1f3d5fe-operator-scripts\") pod \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754797 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-operator-scripts\") pod \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\" (UID: \"c4f73bc6-3700-4cbe-9e5a-7a95c596c039\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754832 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-operator-scripts\") pod \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754852 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hncrh\" (UniqueName: \"kubernetes.io/projected/b3b79866-94e2-413c-b444-4af683c5095e-kube-api-access-hncrh\") pod \"b3b79866-94e2-413c-b444-4af683c5095e\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754917 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8whx\" (UniqueName: \"kubernetes.io/projected/416b4256-31d6-4455-a31e-62c0a1f3d5fe-kube-api-access-q8whx\") pod \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\" (UID: \"416b4256-31d6-4455-a31e-62c0a1f3d5fe\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754944 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdn69\" (UniqueName: \"kubernetes.io/projected/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-kube-api-access-cdn69\") pod \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\" (UID: \"d4222fa7-e71f-4d91-9e0d-ef369046f6a0\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.754986 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b79866-94e2-413c-b444-4af683c5095e-operator-scripts\") pod \"b3b79866-94e2-413c-b444-4af683c5095e\" (UID: \"b3b79866-94e2-413c-b444-4af683c5095e\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.755821 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b3b79866-94e2-413c-b444-4af683c5095e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3b79866-94e2-413c-b444-4af683c5095e" (UID: "b3b79866-94e2-413c-b444-4af683c5095e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.756118 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4222fa7-e71f-4d91-9e0d-ef369046f6a0" (UID: "d4222fa7-e71f-4d91-9e0d-ef369046f6a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.756440 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416b4256-31d6-4455-a31e-62c0a1f3d5fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "416b4256-31d6-4455-a31e-62c0a1f3d5fe" (UID: "416b4256-31d6-4455-a31e-62c0a1f3d5fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.756480 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4f73bc6-3700-4cbe-9e5a-7a95c596c039" (UID: "c4f73bc6-3700-4cbe-9e5a-7a95c596c039"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.764886 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-kube-api-access-cdn69" (OuterVolumeSpecName: "kube-api-access-cdn69") pod "d4222fa7-e71f-4d91-9e0d-ef369046f6a0" (UID: "d4222fa7-e71f-4d91-9e0d-ef369046f6a0"). InnerVolumeSpecName "kube-api-access-cdn69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.764942 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-kube-api-access-km5xc" (OuterVolumeSpecName: "kube-api-access-km5xc") pod "c4f73bc6-3700-4cbe-9e5a-7a95c596c039" (UID: "c4f73bc6-3700-4cbe-9e5a-7a95c596c039"). InnerVolumeSpecName "kube-api-access-km5xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.764986 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b79866-94e2-413c-b444-4af683c5095e-kube-api-access-hncrh" (OuterVolumeSpecName: "kube-api-access-hncrh") pod "b3b79866-94e2-413c-b444-4af683c5095e" (UID: "b3b79866-94e2-413c-b444-4af683c5095e"). InnerVolumeSpecName "kube-api-access-hncrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.765180 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416b4256-31d6-4455-a31e-62c0a1f3d5fe-kube-api-access-q8whx" (OuterVolumeSpecName: "kube-api-access-q8whx") pod "416b4256-31d6-4455-a31e-62c0a1f3d5fe" (UID: "416b4256-31d6-4455-a31e-62c0a1f3d5fe"). InnerVolumeSpecName "kube-api-access-q8whx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.812646 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.858111 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-operator-scripts\") pod \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.860899 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fngb\" (UniqueName: \"kubernetes.io/projected/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-kube-api-access-9fngb\") pod \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\" (UID: \"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52\") " Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.861329 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" (UID: "abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.861913 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8whx\" (UniqueName: \"kubernetes.io/projected/416b4256-31d6-4455-a31e-62c0a1f3d5fe-kube-api-access-q8whx\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.861965 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdn69\" (UniqueName: \"kubernetes.io/projected/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-kube-api-access-cdn69\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.861977 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b79866-94e2-413c-b444-4af683c5095e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.861988 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km5xc\" (UniqueName: \"kubernetes.io/projected/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-kube-api-access-km5xc\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.861997 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416b4256-31d6-4455-a31e-62c0a1f3d5fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.862006 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.862016 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f73bc6-3700-4cbe-9e5a-7a95c596c039-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.862025 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hncrh\" (UniqueName: 
\"kubernetes.io/projected/b3b79866-94e2-413c-b444-4af683c5095e-kube-api-access-hncrh\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.862035 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4222fa7-e71f-4d91-9e0d-ef369046f6a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.863386 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bx69p" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.863639 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bx69p" event={"ID":"abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52","Type":"ContainerDied","Data":"e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504"} Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.863675 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e998704214464f539ef3d46afcc9e4f3d684a62e5d1fa4e06bbea5b68ba28504" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.865168 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9mhp4" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.865685 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9mhp4" event={"ID":"b3b79866-94e2-413c-b444-4af683c5095e","Type":"ContainerDied","Data":"d85a9e4708e21ae3a94de4ae998866493c5b5e1af3f8896b6c7457d97c6c9eb3"} Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.865716 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85a9e4708e21ae3a94de4ae998866493c5b5e1af3f8896b6c7457d97c6c9eb3" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.868920 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-kube-api-access-9fngb" (OuterVolumeSpecName: "kube-api-access-9fngb") pod "abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" (UID: "abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52"). InnerVolumeSpecName "kube-api-access-9fngb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.877609 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3703-account-create-update-7pxx8" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.877948 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3703-account-create-update-7pxx8" event={"ID":"416b4256-31d6-4455-a31e-62c0a1f3d5fe","Type":"ContainerDied","Data":"78c52c9ee883f122cda755ce41ff5d448b9374f8f3a12c9f4e7ee0b874fb112e"} Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.878089 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c52c9ee883f122cda755ce41ff5d448b9374f8f3a12c9f4e7ee0b874fb112e" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.889272 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vrgtc" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.889540 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrgtc" event={"ID":"d4222fa7-e71f-4d91-9e0d-ef369046f6a0","Type":"ContainerDied","Data":"3c1f3af6f3695cbadf8e9e66d682ab438e70e6492e5694a1bdd482c3ab528695"} Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.889609 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c1f3af6f3695cbadf8e9e66d682ab438e70e6492e5694a1bdd482c3ab528695" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.906622 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ef11-account-create-update-dvm84" event={"ID":"c4f73bc6-3700-4cbe-9e5a-7a95c596c039","Type":"ContainerDied","Data":"36d2a9c07fd3b7f8e57321f3b9821fe1ea562cebaa9bd8709d1c1a6859df28b2"} Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.906666 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d2a9c07fd3b7f8e57321f3b9821fe1ea562cebaa9bd8709d1c1a6859df28b2" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.906726 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ef11-account-create-update-dvm84" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.924104 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4e0e-account-create-update-qks52" event={"ID":"6e7279b9-2901-426c-a100-7390d81ae95b","Type":"ContainerDied","Data":"b12f8a8a03bc46c725b96ac23450aa1e40fbdf21a6126ce000cef1b6888abeac"} Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.924182 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12f8a8a03bc46c725b96ac23450aa1e40fbdf21a6126ce000cef1b6888abeac" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.924149 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4e0e-account-create-update-qks52" Feb 02 13:17:45 crc kubenswrapper[4955]: I0202 13:17:45.963713 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fngb\" (UniqueName: \"kubernetes.io/projected/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52-kube-api-access-9fngb\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.289929 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.375345 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6kh2\" (UniqueName: \"kubernetes.io/projected/71a4cd94-2c54-4ce9-a851-a7a107b19451-kube-api-access-b6kh2\") pod \"71a4cd94-2c54-4ce9-a851-a7a107b19451\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.375470 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71a4cd94-2c54-4ce9-a851-a7a107b19451-operator-scripts\") pod \"71a4cd94-2c54-4ce9-a851-a7a107b19451\" (UID: \"71a4cd94-2c54-4ce9-a851-a7a107b19451\") " Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.376273 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a4cd94-2c54-4ce9-a851-a7a107b19451-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71a4cd94-2c54-4ce9-a851-a7a107b19451" (UID: "71a4cd94-2c54-4ce9-a851-a7a107b19451"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.379404 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a4cd94-2c54-4ce9-a851-a7a107b19451-kube-api-access-b6kh2" (OuterVolumeSpecName: "kube-api-access-b6kh2") pod "71a4cd94-2c54-4ce9-a851-a7a107b19451" (UID: "71a4cd94-2c54-4ce9-a851-a7a107b19451"). InnerVolumeSpecName "kube-api-access-b6kh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.477730 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71a4cd94-2c54-4ce9-a851-a7a107b19451-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.477765 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6kh2\" (UniqueName: \"kubernetes.io/projected/71a4cd94-2c54-4ce9-a851-a7a107b19451-kube-api-access-b6kh2\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.917004 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v8fd4" event={"ID":"71a4cd94-2c54-4ce9-a851-a7a107b19451","Type":"ContainerDied","Data":"c99821835bd1f1bf8e7a38ae00a724491a5ebcca2938bf618208eae6134fcd1c"} Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.917039 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99821835bd1f1bf8e7a38ae00a724491a5ebcca2938bf618208eae6134fcd1c" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.917088 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v8fd4" Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.920097 4955 generic.go:334] "Generic (PLEG): container finished" podID="b3cb6dc2-d198-405c-816a-dd3ddb578ed4" containerID="3286c72e03c4d05065b4c4a534f756faec721dce38d570b66188531ade9f2ab2" exitCode=0 Feb 02 13:17:46 crc kubenswrapper[4955]: I0202 13:17:46.920124 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dv4r7" event={"ID":"b3cb6dc2-d198-405c-816a-dd3ddb578ed4","Type":"ContainerDied","Data":"3286c72e03c4d05065b4c4a534f756faec721dce38d570b66188531ade9f2ab2"} Feb 02 13:17:47 crc kubenswrapper[4955]: I0202 13:17:47.536087 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:47 crc kubenswrapper[4955]: I0202 13:17:47.538117 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:47 crc kubenswrapper[4955]: I0202 13:17:47.583887 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:47 crc kubenswrapper[4955]: I0202 13:17:47.620961 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qkf4x" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.090333 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v8fd4"] Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.096121 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v8fd4"] Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.190972 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.244391 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkf4x"] Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.264344 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303408 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-combined-ca-bundle\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303447 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-swiftconf\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303522 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-ring-data-devices\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303598 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-etc-swift\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303642 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2hp\" (UniqueName: \"kubernetes.io/projected/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-kube-api-access-tn2hp\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303668 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-scripts\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.303754 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-dispersionconf\") pod \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\" (UID: \"b3cb6dc2-d198-405c-816a-dd3ddb578ed4\") " Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.305283 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.305973 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.322735 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-kube-api-access-tn2hp" (OuterVolumeSpecName: "kube-api-access-tn2hp") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "kube-api-access-tn2hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.324920 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.337632 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.342883 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.358746 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-scripts" (OuterVolumeSpecName: "scripts") pod "b3cb6dc2-d198-405c-816a-dd3ddb578ed4" (UID: "b3cb6dc2-d198-405c-816a-dd3ddb578ed4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409503 4955 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409542 4955 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409552 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2hp\" (UniqueName: \"kubernetes.io/projected/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-kube-api-access-tn2hp\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409578 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409587 4955 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409595 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.409606 4955 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3cb6dc2-d198-405c-816a-dd3ddb578ed4-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.477005 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xm64"] Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.477278 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7xm64" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="registry-server" containerID="cri-o://5b35245b3edfd65e422cfa9c2619fedc4f894c465faa28b6d2c39b87d2992e81" gracePeriod=2 Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.662746 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.746601 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-z8pdc"] Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.952973 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dv4r7" event={"ID":"b3cb6dc2-d198-405c-816a-dd3ddb578ed4","Type":"ContainerDied","Data":"27d519508c7767adcbfb014136d61326e2caf41875c42e3f2c067bbbf5da3d28"} Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.953005 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d519508c7767adcbfb014136d61326e2caf41875c42e3f2c067bbbf5da3d28" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.953090 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dv4r7" Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.964994 4955 generic.go:334] "Generic (PLEG): container finished" podID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerID="5b35245b3edfd65e422cfa9c2619fedc4f894c465faa28b6d2c39b87d2992e81" exitCode=0 Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.965463 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" containerName="dnsmasq-dns" containerID="cri-o://63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce" gracePeriod=10 Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.965149 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerDied","Data":"5b35245b3edfd65e422cfa9c2619fedc4f894c465faa28b6d2c39b87d2992e81"} Feb 02 13:17:48 crc kubenswrapper[4955]: I0202 13:17:48.970989 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.029941 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-catalog-content\") pod \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.030038 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-utilities\") pod \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.030121 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68x4h\" (UniqueName: \"kubernetes.io/projected/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-kube-api-access-68x4h\") pod \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\" (UID: \"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.032264 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-utilities" (OuterVolumeSpecName: "utilities") pod "da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" (UID: "da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.036722 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-kube-api-access-68x4h" (OuterVolumeSpecName: "kube-api-access-68x4h") pod "da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" (UID: "da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2"). InnerVolumeSpecName "kube-api-access-68x4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.096883 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" (UID: "da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.131967 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68x4h\" (UniqueName: \"kubernetes.io/projected/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-kube-api-access-68x4h\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.132007 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.132019 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.617490 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.643389 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-dns-svc\") pod \"020a486c-6f94-48f2-a093-1aec0829a80b\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.643724 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-config\") pod \"020a486c-6f94-48f2-a093-1aec0829a80b\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.643869 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvvwt\" (UniqueName: \"kubernetes.io/projected/020a486c-6f94-48f2-a093-1aec0829a80b-kube-api-access-xvvwt\") pod \"020a486c-6f94-48f2-a093-1aec0829a80b\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.644035 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-ovsdbserver-nb\") pod \"020a486c-6f94-48f2-a093-1aec0829a80b\" (UID: \"020a486c-6f94-48f2-a093-1aec0829a80b\") " Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.751545 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a4cd94-2c54-4ce9-a851-a7a107b19451" path="/var/lib/kubelet/pods/71a4cd94-2c54-4ce9-a851-a7a107b19451/volumes" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.763764 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020a486c-6f94-48f2-a093-1aec0829a80b-kube-api-access-xvvwt" (OuterVolumeSpecName: "kube-api-access-xvvwt") pod "020a486c-6f94-48f2-a093-1aec0829a80b" (UID: "020a486c-6f94-48f2-a093-1aec0829a80b"). InnerVolumeSpecName "kube-api-access-xvvwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.772352 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-config" (OuterVolumeSpecName: "config") pod "020a486c-6f94-48f2-a093-1aec0829a80b" (UID: "020a486c-6f94-48f2-a093-1aec0829a80b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.777068 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "020a486c-6f94-48f2-a093-1aec0829a80b" (UID: "020a486c-6f94-48f2-a093-1aec0829a80b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.791666 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "020a486c-6f94-48f2-a093-1aec0829a80b" (UID: "020a486c-6f94-48f2-a093-1aec0829a80b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.848515 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.848851 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.848870 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvvwt\" (UniqueName: \"kubernetes.io/projected/020a486c-6f94-48f2-a093-1aec0829a80b-kube-api-access-xvvwt\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.848886 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020a486c-6f94-48f2-a093-1aec0829a80b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.975061 4955 generic.go:334] "Generic (PLEG): container finished" podID="020a486c-6f94-48f2-a093-1aec0829a80b" containerID="63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce" exitCode=0 Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.975126 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.975130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" event={"ID":"020a486c-6f94-48f2-a093-1aec0829a80b","Type":"ContainerDied","Data":"63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce"} Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.975243 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-z8pdc" event={"ID":"020a486c-6f94-48f2-a093-1aec0829a80b","Type":"ContainerDied","Data":"298f8e5504e953fddd6c240c688f733dfc62fa0a0697366045ccd2c5bb123150"} Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.975271 4955 scope.go:117] "RemoveContainer" containerID="63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.978070 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xm64" event={"ID":"da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2","Type":"ContainerDied","Data":"00ed981583e6035fb0b75ec7866ceb87f45afb10c6d12bb2586ba54236dc40a0"} Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.978108 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xm64" Feb 02 13:17:49 crc kubenswrapper[4955]: I0202 13:17:49.996623 4955 scope.go:117] "RemoveContainer" containerID="d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.006490 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xm64"] Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.016674 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7xm64"] Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.035509 4955 scope.go:117] "RemoveContainer" containerID="63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.036045 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce\": container with ID starting with 63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce not found: ID does not exist" containerID="63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.036079 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce"} err="failed to get container status \"63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce\": rpc error: code = NotFound desc = could not find container \"63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce\": container with ID starting with 63bc9b4051713197b2c380ba77cb2bec385370d6f0e38d67b72fba5f4ff34fce not found: ID does not exist" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.036103 4955 scope.go:117] "RemoveContainer" containerID="d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.036414 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a\": container with ID starting with d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a not found: ID does not exist" containerID="d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.036441 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a"} err="failed to get container status \"d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a\": rpc error: code = NotFound desc = could not find container \"d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a\": container with ID starting with d4ce0fc5642a343c893c56cb75eb87477241200afd2bdf8ba50994ce832f5d2a not found: ID does not exist" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.036458 4955 scope.go:117] "RemoveContainer" containerID="5b35245b3edfd65e422cfa9c2619fedc4f894c465faa28b6d2c39b87d2992e81" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.037439 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-z8pdc"] Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.044052 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-z8pdc"] Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.053600 4955 scope.go:117] "RemoveContainer" containerID="33b72168bb86635c109623d78ab335c434f836065ed2aec3c2e5623a3c6aef7f" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.074405 4955 scope.go:117] "RemoveContainer" containerID="3dcc4c6d7157dafa47b5209f4f0c5557a8a43669a54d22492d5fefd3bd73b348" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449137 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4r26c"] Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449516 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7279b9-2901-426c-a100-7390d81ae95b" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449529 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7279b9-2901-426c-a100-7390d81ae95b" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449545 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cb6dc2-d198-405c-816a-dd3ddb578ed4" containerName="swift-ring-rebalance" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449551 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cb6dc2-d198-405c-816a-dd3ddb578ed4" containerName="swift-ring-rebalance" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449576 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b4256-31d6-4455-a31e-62c0a1f3d5fe" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449582 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b4256-31d6-4455-a31e-62c0a1f3d5fe" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449593 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4222fa7-e71f-4d91-9e0d-ef369046f6a0" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449598 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4222fa7-e71f-4d91-9e0d-ef369046f6a0" 
containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449612 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b79866-94e2-413c-b444-4af683c5095e" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449617 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b79866-94e2-413c-b444-4af683c5095e" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449625 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449632 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449641 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="registry-server" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449649 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="registry-server" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449656 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" containerName="dnsmasq-dns" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449661 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" containerName="dnsmasq-dns" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449672 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f73bc6-3700-4cbe-9e5a-7a95c596c039" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449677 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f73bc6-3700-4cbe-9e5a-7a95c596c039" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449685 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="extract-utilities" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449691 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="extract-utilities" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449698 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="extract-content" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449704 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="extract-content" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449712 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a4cd94-2c54-4ce9-a851-a7a107b19451" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449717 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a4cd94-2c54-4ce9-a851-a7a107b19451" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: E0202 13:17:50.449728 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" containerName="init" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 
13:17:50.449734 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" containerName="init" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449867 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cb6dc2-d198-405c-816a-dd3ddb578ed4" containerName="swift-ring-rebalance" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449877 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449885 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" containerName="registry-server" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449892 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b79866-94e2-413c-b444-4af683c5095e" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449900 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f73bc6-3700-4cbe-9e5a-7a95c596c039" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449911 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7279b9-2901-426c-a100-7390d81ae95b" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449920 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" containerName="dnsmasq-dns" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449928 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="416b4256-31d6-4455-a31e-62c0a1f3d5fe" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449935 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a4cd94-2c54-4ce9-a851-a7a107b19451" containerName="mariadb-account-create-update" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.449943 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4222fa7-e71f-4d91-9e0d-ef369046f6a0" containerName="mariadb-database-create" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.450416 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.454136 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.454342 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2fcpn" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.458480 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-db-sync-config-data\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.458782 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfg9x\" (UniqueName: \"kubernetes.io/projected/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-kube-api-access-cfg9x\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.458970 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-combined-ca-bundle\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.459358 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-config-data\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.463077 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4r26c"] Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.561108 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-combined-ca-bundle\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.561351 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-config-data\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.561409 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-db-sync-config-data\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.561443 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfg9x\" (UniqueName: \"kubernetes.io/projected/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-kube-api-access-cfg9x\") pod 
\"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.565946 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-config-data\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.565970 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-combined-ca-bundle\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.568093 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-db-sync-config-data\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.579700 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfg9x\" (UniqueName: \"kubernetes.io/projected/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-kube-api-access-cfg9x\") pod \"glance-db-sync-4r26c\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:50 crc kubenswrapper[4955]: I0202 13:17:50.769868 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4r26c" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.350353 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4r26c"] Feb 02 13:17:51 crc kubenswrapper[4955]: W0202 13:17:51.362698 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5111fc8_b31a_4644_aae9_5a89e4d5da9a.slice/crio-f78f46c7388410a7f2f58a2518cc819516e8d71a4875a086f6a90cf12a011266 WatchSource:0}: Error finding container f78f46c7388410a7f2f58a2518cc819516e8d71a4875a086f6a90cf12a011266: Status 404 returned error can't find the container with id f78f46c7388410a7f2f58a2518cc819516e8d71a4875a086f6a90cf12a011266 Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.726066 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020a486c-6f94-48f2-a093-1aec0829a80b" path="/var/lib/kubelet/pods/020a486c-6f94-48f2-a093-1aec0829a80b/volumes" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.726917 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2" path="/var/lib/kubelet/pods/da5b1d16-e075-4abc-8d7e-9f0f08c6b4b2/volumes" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.727699 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wl9t7"] Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.729332 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.731483 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.735361 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wl9t7"] Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.783644 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvshf\" (UniqueName: \"kubernetes.io/projected/320e0fb7-ed3a-4650-b474-62da91b401ee-kube-api-access-pvshf\") pod \"root-account-create-update-wl9t7\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.783780 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320e0fb7-ed3a-4650-b474-62da91b401ee-operator-scripts\") pod \"root-account-create-update-wl9t7\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.885413 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvshf\" (UniqueName: \"kubernetes.io/projected/320e0fb7-ed3a-4650-b474-62da91b401ee-kube-api-access-pvshf\") pod \"root-account-create-update-wl9t7\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.885596 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320e0fb7-ed3a-4650-b474-62da91b401ee-operator-scripts\") pod \"root-account-create-update-wl9t7\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.886492 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320e0fb7-ed3a-4650-b474-62da91b401ee-operator-scripts\") pod \"root-account-create-update-wl9t7\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:51 crc kubenswrapper[4955]: I0202 13:17:51.905225 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvshf\" (UniqueName: \"kubernetes.io/projected/320e0fb7-ed3a-4650-b474-62da91b401ee-kube-api-access-pvshf\") pod \"root-account-create-update-wl9t7\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:52 crc kubenswrapper[4955]: I0202 13:17:52.004374 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4r26c" event={"ID":"d5111fc8-b31a-4644-aae9-5a89e4d5da9a","Type":"ContainerStarted","Data":"f78f46c7388410a7f2f58a2518cc819516e8d71a4875a086f6a90cf12a011266"} Feb 02 13:17:52 crc kubenswrapper[4955]: I0202 13:17:52.006723 4955 generic.go:334] "Generic (PLEG): container finished" podID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerID="7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1" exitCode=0 Feb 02 13:17:52 crc kubenswrapper[4955]: I0202 13:17:52.006762 4955 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-server-0" event={"ID":"60f684bd-051c-4608-8c11-1058cd2d6a01","Type":"ContainerDied","Data":"7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1"} Feb 02 13:17:52 crc kubenswrapper[4955]: I0202 13:17:52.053412 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:52 crc kubenswrapper[4955]: I0202 13:17:52.492499 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wl9t7"] Feb 02 13:17:52 crc kubenswrapper[4955]: W0202 13:17:52.499431 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod320e0fb7_ed3a_4650_b474_62da91b401ee.slice/crio-0060b34c2100d020f63d0c50baf11ccac8efee67d5c769029a0c5c0b410ee35c WatchSource:0}: Error finding container 0060b34c2100d020f63d0c50baf11ccac8efee67d5c769029a0c5c0b410ee35c: Status 404 returned error can't find the container with id 0060b34c2100d020f63d0c50baf11ccac8efee67d5c769029a0c5c0b410ee35c Feb 02 13:17:53 crc kubenswrapper[4955]: I0202 13:17:53.016467 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"60f684bd-051c-4608-8c11-1058cd2d6a01","Type":"ContainerStarted","Data":"50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73"} Feb 02 13:17:53 crc kubenswrapper[4955]: I0202 13:17:53.016699 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 13:17:53 crc kubenswrapper[4955]: I0202 13:17:53.018160 4955 generic.go:334] "Generic (PLEG): container finished" podID="320e0fb7-ed3a-4650-b474-62da91b401ee" containerID="3c37174880b2aed37d6b3fa6e4a631044a5de1bb847112007526bd7b0e6412dd" exitCode=0 Feb 02 13:17:53 crc kubenswrapper[4955]: I0202 13:17:53.018196 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wl9t7" event={"ID":"320e0fb7-ed3a-4650-b474-62da91b401ee","Type":"ContainerDied","Data":"3c37174880b2aed37d6b3fa6e4a631044a5de1bb847112007526bd7b0e6412dd"} Feb 02 13:17:53 crc kubenswrapper[4955]: I0202 13:17:53.018232 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wl9t7" event={"ID":"320e0fb7-ed3a-4650-b474-62da91b401ee","Type":"ContainerStarted","Data":"0060b34c2100d020f63d0c50baf11ccac8efee67d5c769029a0c5c0b410ee35c"} Feb 02 13:17:53 crc kubenswrapper[4955]: I0202 13:17:53.044228 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.734503139 podStartE2EDuration="1m3.04421093s" podCreationTimestamp="2026-02-02 13:16:50 +0000 UTC" firstStartedPulling="2026-02-02 13:16:52.43343872 +0000 UTC m=+863.345775170" lastFinishedPulling="2026-02-02 13:17:18.743146511 +0000 UTC m=+889.655482961" observedRunningTime="2026-02-02 13:17:53.038269287 +0000 UTC m=+923.950605757" watchObservedRunningTime="2026-02-02 13:17:53.04421093 +0000 UTC m=+923.956547390" Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.419798 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.530023 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320e0fb7-ed3a-4650-b474-62da91b401ee-operator-scripts\") pod \"320e0fb7-ed3a-4650-b474-62da91b401ee\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.530074 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvshf\" (UniqueName: \"kubernetes.io/projected/320e0fb7-ed3a-4650-b474-62da91b401ee-kube-api-access-pvshf\") pod \"320e0fb7-ed3a-4650-b474-62da91b401ee\" (UID: \"320e0fb7-ed3a-4650-b474-62da91b401ee\") " Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.530609 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320e0fb7-ed3a-4650-b474-62da91b401ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "320e0fb7-ed3a-4650-b474-62da91b401ee" (UID: "320e0fb7-ed3a-4650-b474-62da91b401ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.535800 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320e0fb7-ed3a-4650-b474-62da91b401ee-kube-api-access-pvshf" (OuterVolumeSpecName: "kube-api-access-pvshf") pod "320e0fb7-ed3a-4650-b474-62da91b401ee" (UID: "320e0fb7-ed3a-4650-b474-62da91b401ee"). InnerVolumeSpecName "kube-api-access-pvshf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.632110 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320e0fb7-ed3a-4650-b474-62da91b401ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:54 crc kubenswrapper[4955]: I0202 13:17:54.632153 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvshf\" (UniqueName: \"kubernetes.io/projected/320e0fb7-ed3a-4650-b474-62da91b401ee-kube-api-access-pvshf\") on node \"crc\" DevicePath \"\"" Feb 02 13:17:55 crc kubenswrapper[4955]: I0202 13:17:55.037414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wl9t7" event={"ID":"320e0fb7-ed3a-4650-b474-62da91b401ee","Type":"ContainerDied","Data":"0060b34c2100d020f63d0c50baf11ccac8efee67d5c769029a0c5c0b410ee35c"} Feb 02 13:17:55 crc kubenswrapper[4955]: I0202 13:17:55.037726 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0060b34c2100d020f63d0c50baf11ccac8efee67d5c769029a0c5c0b410ee35c" Feb 02 13:17:55 crc kubenswrapper[4955]: I0202 13:17:55.037460 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wl9t7" Feb 02 13:17:55 crc kubenswrapper[4955]: I0202 13:17:55.107459 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-p7twj" podUID="480861f6-44ea-41c3-806e-497f3177eb91" containerName="ovn-controller" probeResult="failure" output=< Feb 02 13:17:55 crc kubenswrapper[4955]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 13:17:55 crc kubenswrapper[4955]: > Feb 02 13:17:55 crc kubenswrapper[4955]: I0202 13:17:55.515068 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 13:17:57 crc kubenswrapper[4955]: I0202 13:17:57.054000 4955 generic.go:334] "Generic (PLEG): container finished" podID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerID="216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f" exitCode=0 Feb 02 13:17:57 crc kubenswrapper[4955]: I0202 13:17:57.054287 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"827225b6-1672-40b1-a9ee-7dd2d5db2d1d","Type":"ContainerDied","Data":"216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f"} Feb 02 13:17:57 crc kubenswrapper[4955]: I0202 13:17:57.591370 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:17:58 crc kubenswrapper[4955]: I0202 13:17:58.101136 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wl9t7"] Feb 02 13:17:58 crc kubenswrapper[4955]: I0202 13:17:58.108883 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wl9t7"] Feb 02 13:17:58 crc kubenswrapper[4955]: I0202 13:17:58.397380 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2g8w"] Feb 02 13:17:58 crc kubenswrapper[4955]: I0202 13:17:58.397615 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n2g8w" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="registry-server" containerID="cri-o://04d4792136abce9307afa6296c2f76e3202ed0ccb468d6362454cdd2d7daeb24" gracePeriod=2 Feb 02 13:17:59 crc kubenswrapper[4955]: I0202 13:17:59.071254 4955 generic.go:334] "Generic (PLEG): container finished" podID="735a40d7-b7b4-461b-b99f-6557672748e7" containerID="04d4792136abce9307afa6296c2f76e3202ed0ccb468d6362454cdd2d7daeb24" exitCode=0 Feb 02 13:17:59 crc kubenswrapper[4955]: I0202 13:17:59.071461 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2g8w" event={"ID":"735a40d7-b7b4-461b-b99f-6557672748e7","Type":"ContainerDied","Data":"04d4792136abce9307afa6296c2f76e3202ed0ccb468d6362454cdd2d7daeb24"} Feb 02 13:17:59 crc kubenswrapper[4955]: I0202 13:17:59.726994 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320e0fb7-ed3a-4650-b474-62da91b401ee" path="/var/lib/kubelet/pods/320e0fb7-ed3a-4650-b474-62da91b401ee/volumes" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.029756 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 
13:18:00.036480 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec1a6503-248d-4f72-a3ab-e23df2ca163d-etc-swift\") pod \"swift-storage-0\" (UID: \"ec1a6503-248d-4f72-a3ab-e23df2ca163d\") " pod="openstack/swift-storage-0" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.129226 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-p7twj" podUID="480861f6-44ea-41c3-806e-497f3177eb91" containerName="ovn-controller" probeResult="failure" output=< Feb 02 13:18:00 crc kubenswrapper[4955]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 13:18:00 crc kubenswrapper[4955]: > Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.210708 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.230922 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h5j4t" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.280519 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.460531 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p7twj-config-wjb2m"] Feb 02 13:18:00 crc kubenswrapper[4955]: E0202 13:18:00.460934 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320e0fb7-ed3a-4650-b474-62da91b401ee" containerName="mariadb-account-create-update" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.460961 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="320e0fb7-ed3a-4650-b474-62da91b401ee" containerName="mariadb-account-create-update" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.461184 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="320e0fb7-ed3a-4650-b474-62da91b401ee" containerName="mariadb-account-create-update" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.461822 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.464423 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.484728 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7twj-config-wjb2m"] Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.641795 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-additional-scripts\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.641876 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run-ovn\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.641917 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-log-ovn\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.641940 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-scripts\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.641972 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.641989 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshnx\" (UniqueName: \"kubernetes.io/projected/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-kube-api-access-qshnx\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.743459 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run-ovn\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.743514 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-log-ovn\") 
pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.743540 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-scripts\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.743593 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.743618 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshnx\" (UniqueName: \"kubernetes.io/projected/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-kube-api-access-qshnx\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.743717 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-additional-scripts\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.744017 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-log-ovn\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.744072 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run-ovn\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.744095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.744535 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-additional-scripts\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.745652 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-scripts\") pod \"ovn-controller-p7twj-config-wjb2m\" 
(UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.765480 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshnx\" (UniqueName: \"kubernetes.io/projected/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-kube-api-access-qshnx\") pod \"ovn-controller-p7twj-config-wjb2m\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:00 crc kubenswrapper[4955]: I0202 13:18:00.797941 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.016609 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.016994 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.100248 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-82h78"] Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.101409 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.103226 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.112506 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-82h78"] Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.202620 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcs7\" (UniqueName: \"kubernetes.io/projected/7aba3431-7e59-4d8e-9205-071948d70a8a-kube-api-access-bqcs7\") pod \"root-account-create-update-82h78\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.202694 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aba3431-7e59-4d8e-9205-071948d70a8a-operator-scripts\") pod \"root-account-create-update-82h78\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.310673 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcs7\" (UniqueName: \"kubernetes.io/projected/7aba3431-7e59-4d8e-9205-071948d70a8a-kube-api-access-bqcs7\") pod \"root-account-create-update-82h78\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.310794 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aba3431-7e59-4d8e-9205-071948d70a8a-operator-scripts\") pod \"root-account-create-update-82h78\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.312212 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aba3431-7e59-4d8e-9205-071948d70a8a-operator-scripts\") pod \"root-account-create-update-82h78\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.336067 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcs7\" (UniqueName: \"kubernetes.io/projected/7aba3431-7e59-4d8e-9205-071948d70a8a-kube-api-access-bqcs7\") pod \"root-account-create-update-82h78\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " pod="openstack/root-account-create-update-82h78" Feb 02 13:18:03 crc kubenswrapper[4955]: I0202 13:18:03.424966 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82h78" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.107753 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-p7twj" podUID="480861f6-44ea-41c3-806e-497f3177eb91" containerName="ovn-controller" probeResult="failure" output=< Feb 02 13:18:05 crc kubenswrapper[4955]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 13:18:05 crc kubenswrapper[4955]: > Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.311927 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.446192 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-utilities\") pod \"735a40d7-b7b4-461b-b99f-6557672748e7\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.446326 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-catalog-content\") pod \"735a40d7-b7b4-461b-b99f-6557672748e7\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.446392 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95mrw\" (UniqueName: \"kubernetes.io/projected/735a40d7-b7b4-461b-b99f-6557672748e7-kube-api-access-95mrw\") pod \"735a40d7-b7b4-461b-b99f-6557672748e7\" (UID: \"735a40d7-b7b4-461b-b99f-6557672748e7\") " Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.447784 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-utilities" (OuterVolumeSpecName: "utilities") pod "735a40d7-b7b4-461b-b99f-6557672748e7" (UID: "735a40d7-b7b4-461b-b99f-6557672748e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.451206 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735a40d7-b7b4-461b-b99f-6557672748e7-kube-api-access-95mrw" (OuterVolumeSpecName: "kube-api-access-95mrw") pod "735a40d7-b7b4-461b-b99f-6557672748e7" (UID: "735a40d7-b7b4-461b-b99f-6557672748e7"). InnerVolumeSpecName "kube-api-access-95mrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.509401 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "735a40d7-b7b4-461b-b99f-6557672748e7" (UID: "735a40d7-b7b4-461b-b99f-6557672748e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.550399 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.550441 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95mrw\" (UniqueName: \"kubernetes.io/projected/735a40d7-b7b4-461b-b99f-6557672748e7-kube-api-access-95mrw\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.550459 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735a40d7-b7b4-461b-b99f-6557672748e7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.604940 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-82h78"] Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.665295 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7twj-config-wjb2m"] Feb 02 13:18:05 crc kubenswrapper[4955]: W0202 13:18:05.699126 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb29feccf_7d0b_4ee1_8e0b_c25c33fb3be5.slice/crio-ab66f43a91602ba6cd6fd0281e03585ed9b2e5a15c535654493155a46787423f WatchSource:0}: Error finding container ab66f43a91602ba6cd6fd0281e03585ed9b2e5a15c535654493155a46787423f: Status 404 returned error can't find the container with id ab66f43a91602ba6cd6fd0281e03585ed9b2e5a15c535654493155a46787423f Feb 02 13:18:05 crc kubenswrapper[4955]: I0202 13:18:05.777922 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.116256 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpct9"] Feb 02 13:18:06 crc kubenswrapper[4955]: E0202 13:18:06.116888 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="registry-server" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.116902 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="registry-server" Feb 02 13:18:06 crc kubenswrapper[4955]: E0202 13:18:06.116924 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" 
containerName="extract-utilities" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.116931 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="extract-utilities" Feb 02 13:18:06 crc kubenswrapper[4955]: E0202 13:18:06.116940 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="extract-content" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.116946 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="extract-content" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.117109 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" containerName="registry-server" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.118289 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.128318 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpct9"] Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.160080 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"827225b6-1672-40b1-a9ee-7dd2d5db2d1d","Type":"ContainerStarted","Data":"b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.160389 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.164981 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82h78" event={"ID":"7aba3431-7e59-4d8e-9205-071948d70a8a","Type":"ContainerStarted","Data":"c8be1efccc92b4fbd85ead057b56cb853a94cc68e72028a05c3e87430a9f375c"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.165018 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82h78" event={"ID":"7aba3431-7e59-4d8e-9205-071948d70a8a","Type":"ContainerStarted","Data":"866ec32f82c81819d547c0acecf6ce4366e9a88ee7b85898518bc2f4ffaba264"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.168255 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7twj-config-wjb2m" event={"ID":"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5","Type":"ContainerStarted","Data":"968da7fd198b59ffa6572be639223f876099280314ee40d775d4f1fe9de0b33d"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.168310 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7twj-config-wjb2m" event={"ID":"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5","Type":"ContainerStarted","Data":"ab66f43a91602ba6cd6fd0281e03585ed9b2e5a15c535654493155a46787423f"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.169604 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"afd894cdf90ceae1f5c4ec7cc27f6e83d9aee594950c6358e33a64b123e81182"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.171010 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4r26c" 
event={"ID":"d5111fc8-b31a-4644-aae9-5a89e4d5da9a","Type":"ContainerStarted","Data":"1feb7d9e24726f65a8dc626bd665e9c2fca46e0ac33505c58d0d65765b4593b0"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.177797 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2g8w" event={"ID":"735a40d7-b7b4-461b-b99f-6557672748e7","Type":"ContainerDied","Data":"9333eb086d853de1a39b209aa6329aac6c90be3dc44ebb10ba3048ab1c34236c"} Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.177856 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2g8w" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.177865 4955 scope.go:117] "RemoveContainer" containerID="04d4792136abce9307afa6296c2f76e3202ed0ccb468d6362454cdd2d7daeb24" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.189171 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371960.665625 podStartE2EDuration="1m16.189150961s" podCreationTimestamp="2026-02-02 13:16:50 +0000 UTC" firstStartedPulling="2026-02-02 13:16:53.111188908 +0000 UTC m=+864.023525358" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:06.185922143 +0000 UTC m=+937.098258593" watchObservedRunningTime="2026-02-02 13:18:06.189150961 +0000 UTC m=+937.101487411" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.209491 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-82h78" podStartSLOduration=3.209468732 podStartE2EDuration="3.209468732s" podCreationTimestamp="2026-02-02 13:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:06.205329522 +0000 UTC m=+937.117665972" watchObservedRunningTime="2026-02-02 13:18:06.209468732 +0000 UTC m=+937.121805182" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.221104 4955 scope.go:117] "RemoveContainer" containerID="ad346f7a213e411c8bddced5354baf867b3d1a3b21454c065931cb64b75d6d33" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.227903 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4r26c" podStartSLOduration=2.400394692 podStartE2EDuration="16.227885487s" podCreationTimestamp="2026-02-02 13:17:50 +0000 UTC" firstStartedPulling="2026-02-02 13:17:51.364957185 +0000 UTC m=+922.277293635" lastFinishedPulling="2026-02-02 13:18:05.19244798 +0000 UTC m=+936.104784430" observedRunningTime="2026-02-02 13:18:06.217997268 +0000 UTC m=+937.130333718" watchObservedRunningTime="2026-02-02 13:18:06.227885487 +0000 UTC m=+937.140221937" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.246008 4955 scope.go:117] "RemoveContainer" containerID="0db8154f17039b7ef107155f12d12717bc6bff63ece06f3ebd3924ad281e99e7" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.247216 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-p7twj-config-wjb2m" podStartSLOduration=6.247198163 podStartE2EDuration="6.247198163s" podCreationTimestamp="2026-02-02 13:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:06.242139781 +0000 UTC m=+937.154476231" watchObservedRunningTime="2026-02-02 13:18:06.247198163 +0000 UTC 
m=+937.159534613" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.260693 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2g8w"] Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.261717 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-catalog-content\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.261812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snxnk\" (UniqueName: \"kubernetes.io/projected/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-kube-api-access-snxnk\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.261851 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-utilities\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.268150 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n2g8w"] Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.363896 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-catalog-content\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.363962 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snxnk\" (UniqueName: \"kubernetes.io/projected/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-kube-api-access-snxnk\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.363988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-utilities\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.365139 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-catalog-content\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.365925 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-utilities\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc 
kubenswrapper[4955]: I0202 13:18:06.411491 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snxnk\" (UniqueName: \"kubernetes.io/projected/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-kube-api-access-snxnk\") pod \"redhat-operators-hpct9\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:06 crc kubenswrapper[4955]: I0202 13:18:06.442985 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:07 crc kubenswrapper[4955]: I0202 13:18:07.105471 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpct9"] Feb 02 13:18:07 crc kubenswrapper[4955]: I0202 13:18:07.189101 4955 generic.go:334] "Generic (PLEG): container finished" podID="7aba3431-7e59-4d8e-9205-071948d70a8a" containerID="c8be1efccc92b4fbd85ead057b56cb853a94cc68e72028a05c3e87430a9f375c" exitCode=0 Feb 02 13:18:07 crc kubenswrapper[4955]: I0202 13:18:07.189180 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82h78" event={"ID":"7aba3431-7e59-4d8e-9205-071948d70a8a","Type":"ContainerDied","Data":"c8be1efccc92b4fbd85ead057b56cb853a94cc68e72028a05c3e87430a9f375c"} Feb 02 13:18:07 crc kubenswrapper[4955]: I0202 13:18:07.192179 4955 generic.go:334] "Generic (PLEG): container finished" podID="b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" containerID="968da7fd198b59ffa6572be639223f876099280314ee40d775d4f1fe9de0b33d" exitCode=0 Feb 02 13:18:07 crc kubenswrapper[4955]: I0202 13:18:07.192234 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7twj-config-wjb2m" event={"ID":"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5","Type":"ContainerDied","Data":"968da7fd198b59ffa6572be639223f876099280314ee40d775d4f1fe9de0b33d"} Feb 02 13:18:07 crc kubenswrapper[4955]: I0202 13:18:07.736882 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735a40d7-b7b4-461b-b99f-6557672748e7" path="/var/lib/kubelet/pods/735a40d7-b7b4-461b-b99f-6557672748e7/volumes" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.207876 4955 generic.go:334] "Generic (PLEG): container finished" podID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerID="9d38c30f4539e1e834a176291ccf2b36a806b8144cad8e4dcc251766caf418c2" exitCode=0 Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.210354 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerDied","Data":"9d38c30f4539e1e834a176291ccf2b36a806b8144cad8e4dcc251766caf418c2"} Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.210411 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerStarted","Data":"2fb83f6923219d9f7d9868184232e917db24dc15eb53afcce2c9b31abbdc135a"} Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.220174 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"415338c0c4090c8649049500b49b095d50fb4e4096cfc14755fc505f48a410d8"} Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.220225 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"cf62c05744d186079c9c89a39b786cde8c82162d5c59f593287aa21dfba52bfc"} Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.542432 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82h78" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.666275 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.714113 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcs7\" (UniqueName: \"kubernetes.io/projected/7aba3431-7e59-4d8e-9205-071948d70a8a-kube-api-access-bqcs7\") pod \"7aba3431-7e59-4d8e-9205-071948d70a8a\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.714203 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aba3431-7e59-4d8e-9205-071948d70a8a-operator-scripts\") pod \"7aba3431-7e59-4d8e-9205-071948d70a8a\" (UID: \"7aba3431-7e59-4d8e-9205-071948d70a8a\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.715662 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aba3431-7e59-4d8e-9205-071948d70a8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7aba3431-7e59-4d8e-9205-071948d70a8a" (UID: "7aba3431-7e59-4d8e-9205-071948d70a8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.720911 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aba3431-7e59-4d8e-9205-071948d70a8a-kube-api-access-bqcs7" (OuterVolumeSpecName: "kube-api-access-bqcs7") pod "7aba3431-7e59-4d8e-9205-071948d70a8a" (UID: "7aba3431-7e59-4d8e-9205-071948d70a8a"). InnerVolumeSpecName "kube-api-access-bqcs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.815729 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run-ovn\") pod \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.816026 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run\") pod \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.816150 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qshnx\" (UniqueName: \"kubernetes.io/projected/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-kube-api-access-qshnx\") pod \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.816827 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-additional-scripts\") pod \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.816941 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-log-ovn\") pod \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.817074 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-scripts\") pod \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\" (UID: \"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5\") " Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.817690 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aba3431-7e59-4d8e-9205-071948d70a8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.817770 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqcs7\" (UniqueName: \"kubernetes.io/projected/7aba3431-7e59-4d8e-9205-071948d70a8a-kube-api-access-bqcs7\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.815985 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" (UID: "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.816103 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run" (OuterVolumeSpecName: "var-run") pod "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" (UID: "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.817761 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" (UID: "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.817774 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" (UID: "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.818959 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-scripts" (OuterVolumeSpecName: "scripts") pod "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" (UID: "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.819637 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-kube-api-access-qshnx" (OuterVolumeSpecName: "kube-api-access-qshnx") pod "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" (UID: "b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5"). InnerVolumeSpecName "kube-api-access-qshnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.919748 4955 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.919772 4955 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.919782 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qshnx\" (UniqueName: \"kubernetes.io/projected/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-kube-api-access-qshnx\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.919791 4955 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.919800 4955 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:08 crc kubenswrapper[4955]: I0202 13:18:08.919811 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.226939 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-82h78" Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.226940 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82h78" event={"ID":"7aba3431-7e59-4d8e-9205-071948d70a8a","Type":"ContainerDied","Data":"866ec32f82c81819d547c0acecf6ce4366e9a88ee7b85898518bc2f4ffaba264"} Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.227104 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866ec32f82c81819d547c0acecf6ce4366e9a88ee7b85898518bc2f4ffaba264" Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.229044 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"8544754c7c022c7ec2af933482500a7af76ad6a37ed9d95672ff2780bff41a26"} Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.229086 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"aaf2b150dc0d0ca37e813e0e68af38db4483c5364737aae757217baf65766c32"} Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.230032 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7twj-config-wjb2m" event={"ID":"b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5","Type":"ContainerDied","Data":"ab66f43a91602ba6cd6fd0281e03585ed9b2e5a15c535654493155a46787423f"} Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.230068 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab66f43a91602ba6cd6fd0281e03585ed9b2e5a15c535654493155a46787423f" Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.230118 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p7twj-config-wjb2m" Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.760534 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-p7twj-config-wjb2m"] Feb 02 13:18:09 crc kubenswrapper[4955]: I0202 13:18:09.772230 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-p7twj-config-wjb2m"] Feb 02 13:18:10 crc kubenswrapper[4955]: I0202 13:18:10.091876 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-p7twj" Feb 02 13:18:11 crc kubenswrapper[4955]: I0202 13:18:11.701734 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 13:18:11 crc kubenswrapper[4955]: I0202 13:18:11.727404 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" path="/var/lib/kubelet/pods/b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5/volumes" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.084249 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nzgmz"] Feb 02 13:18:12 crc kubenswrapper[4955]: E0202 13:18:12.084539 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" containerName="ovn-config" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.084574 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" containerName="ovn-config" Feb 02 13:18:12 crc kubenswrapper[4955]: E0202 13:18:12.084591 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aba3431-7e59-4d8e-9205-071948d70a8a" containerName="mariadb-account-create-update" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.084597 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aba3431-7e59-4d8e-9205-071948d70a8a" containerName="mariadb-account-create-update" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.084739 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aba3431-7e59-4d8e-9205-071948d70a8a" containerName="mariadb-account-create-update" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.084759 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29feccf-7d0b-4ee1-8e0b-c25c33fb3be5" containerName="ovn-config" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.085238 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.094854 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nzgmz"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.173158 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22defd53-64bc-47b9-86e8-21563ec3a37f-operator-scripts\") pod \"cinder-db-create-nzgmz\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.173235 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z75h\" (UniqueName: \"kubernetes.io/projected/22defd53-64bc-47b9-86e8-21563ec3a37f-kube-api-access-8z75h\") pod \"cinder-db-create-nzgmz\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.190320 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7474-account-create-update-4lqrk"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.191476 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.193574 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.201672 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-gfs47"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.202678 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.212805 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7474-account-create-update-4lqrk"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.266450 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gfs47"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.274568 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z75h\" (UniqueName: \"kubernetes.io/projected/22defd53-64bc-47b9-86e8-21563ec3a37f-kube-api-access-8z75h\") pod \"cinder-db-create-nzgmz\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.274690 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22defd53-64bc-47b9-86e8-21563ec3a37f-operator-scripts\") pod \"cinder-db-create-nzgmz\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.277290 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22defd53-64bc-47b9-86e8-21563ec3a37f-operator-scripts\") pod \"cinder-db-create-nzgmz\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.312484 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-g8c5d"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.313781 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.325105 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z75h\" (UniqueName: \"kubernetes.io/projected/22defd53-64bc-47b9-86e8-21563ec3a37f-kube-api-access-8z75h\") pod \"cinder-db-create-nzgmz\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.325168 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-21d7-account-create-update-l26fg"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.326240 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.344363 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-g8c5d"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.348996 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.374673 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-21d7-account-create-update-l26fg"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.376784 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda69844-00d2-4981-bfbd-1d4ed05274d1-operator-scripts\") pod \"barbican-7474-account-create-update-4lqrk\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.376874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654ll\" (UniqueName: \"kubernetes.io/projected/dea27d86-5db9-49fd-b9bf-44176e78d3d6-kube-api-access-654ll\") pod \"barbican-db-create-gfs47\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.376912 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea27d86-5db9-49fd-b9bf-44176e78d3d6-operator-scripts\") pod \"barbican-db-create-gfs47\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.376959 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrt26\" (UniqueName: \"kubernetes.io/projected/cda69844-00d2-4981-bfbd-1d4ed05274d1-kube-api-access-wrt26\") pod \"barbican-7474-account-create-update-4lqrk\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.403366 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.414040 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-600f-account-create-update-777t8"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.415283 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.418597 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.428667 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-600f-account-create-update-777t8"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482393 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z54qd\" (UniqueName: \"kubernetes.io/projected/3544eea9-736e-471c-85b4-b59aab2d5533-kube-api-access-z54qd\") pod \"heat-21d7-account-create-update-l26fg\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482450 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654ll\" (UniqueName: \"kubernetes.io/projected/dea27d86-5db9-49fd-b9bf-44176e78d3d6-kube-api-access-654ll\") pod \"barbican-db-create-gfs47\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482477 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea27d86-5db9-49fd-b9bf-44176e78d3d6-operator-scripts\") pod \"barbican-db-create-gfs47\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482502 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3544eea9-736e-471c-85b4-b59aab2d5533-operator-scripts\") pod \"heat-21d7-account-create-update-l26fg\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482530 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnjmw\" (UniqueName: \"kubernetes.io/projected/fe99b176-e998-4cdd-9cef-32407153cc79-kube-api-access-lnjmw\") pod \"heat-db-create-g8c5d\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482569 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrt26\" (UniqueName: \"kubernetes.io/projected/cda69844-00d2-4981-bfbd-1d4ed05274d1-kube-api-access-wrt26\") pod \"barbican-7474-account-create-update-4lqrk\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482596 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe99b176-e998-4cdd-9cef-32407153cc79-operator-scripts\") pod \"heat-db-create-g8c5d\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.482727 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cda69844-00d2-4981-bfbd-1d4ed05274d1-operator-scripts\") pod \"barbican-7474-account-create-update-4lqrk\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.483335 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda69844-00d2-4981-bfbd-1d4ed05274d1-operator-scripts\") pod \"barbican-7474-account-create-update-4lqrk\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.484012 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea27d86-5db9-49fd-b9bf-44176e78d3d6-operator-scripts\") pod \"barbican-db-create-gfs47\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.522736 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrt26\" (UniqueName: \"kubernetes.io/projected/cda69844-00d2-4981-bfbd-1d4ed05274d1-kube-api-access-wrt26\") pod \"barbican-7474-account-create-update-4lqrk\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.527609 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6xnc6"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.528670 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.532916 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654ll\" (UniqueName: \"kubernetes.io/projected/dea27d86-5db9-49fd-b9bf-44176e78d3d6-kube-api-access-654ll\") pod \"barbican-db-create-gfs47\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.533311 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.533528 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.533660 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jnmgd" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.533868 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.535058 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6xnc6"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.585092 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3544eea9-736e-471c-85b4-b59aab2d5533-operator-scripts\") pod \"heat-21d7-account-create-update-l26fg\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.585438 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lnjmw\" (UniqueName: \"kubernetes.io/projected/fe99b176-e998-4cdd-9cef-32407153cc79-kube-api-access-lnjmw\") pod \"heat-db-create-g8c5d\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.585499 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe99b176-e998-4cdd-9cef-32407153cc79-operator-scripts\") pod \"heat-db-create-g8c5d\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.585592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d03337-7374-44ab-8c95-d092c42d2355-operator-scripts\") pod \"cinder-600f-account-create-update-777t8\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.585693 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgg8\" (UniqueName: \"kubernetes.io/projected/27d03337-7374-44ab-8c95-d092c42d2355-kube-api-access-wmgg8\") pod \"cinder-600f-account-create-update-777t8\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.585723 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z54qd\" (UniqueName: \"kubernetes.io/projected/3544eea9-736e-471c-85b4-b59aab2d5533-kube-api-access-z54qd\") pod \"heat-21d7-account-create-update-l26fg\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.586904 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3544eea9-736e-471c-85b4-b59aab2d5533-operator-scripts\") pod \"heat-21d7-account-create-update-l26fg\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.587733 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe99b176-e998-4cdd-9cef-32407153cc79-operator-scripts\") pod \"heat-db-create-g8c5d\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.624055 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vhvl9"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.625149 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.626033 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z54qd\" (UniqueName: \"kubernetes.io/projected/3544eea9-736e-471c-85b4-b59aab2d5533-kube-api-access-z54qd\") pod \"heat-21d7-account-create-update-l26fg\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.627479 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnjmw\" (UniqueName: \"kubernetes.io/projected/fe99b176-e998-4cdd-9cef-32407153cc79-kube-api-access-lnjmw\") pod \"heat-db-create-g8c5d\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.635121 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4dd6-account-create-update-bv6lj"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.636108 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.642663 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.668313 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vhvl9"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.675283 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.702328 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.709327 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d03337-7374-44ab-8c95-d092c42d2355-operator-scripts\") pod \"cinder-600f-account-create-update-777t8\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.709437 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-combined-ca-bundle\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.709529 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqpg\" (UniqueName: \"kubernetes.io/projected/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-kube-api-access-sfqpg\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.715875 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4dd6-account-create-update-bv6lj"] Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.717795 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-config-data\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.717875 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgg8\" (UniqueName: \"kubernetes.io/projected/27d03337-7374-44ab-8c95-d092c42d2355-kube-api-access-wmgg8\") pod \"cinder-600f-account-create-update-777t8\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.730106 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d03337-7374-44ab-8c95-d092c42d2355-operator-scripts\") pod \"cinder-600f-account-create-update-777t8\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.745066 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgg8\" (UniqueName: \"kubernetes.io/projected/27d03337-7374-44ab-8c95-d092c42d2355-kube-api-access-wmgg8\") pod \"cinder-600f-account-create-update-777t8\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.797206 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.814255 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.818969 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfqpg\" (UniqueName: \"kubernetes.io/projected/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-kube-api-access-sfqpg\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.819036 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpzfq\" (UniqueName: \"kubernetes.io/projected/765f7521-d8d5-4034-b94c-e64a698c65ae-kube-api-access-fpzfq\") pod \"neutron-db-create-vhvl9\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.819069 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-config-data\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.819162 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a847cbe7-3090-4d64-9faf-ed4414d614ad-operator-scripts\") pod \"neutron-4dd6-account-create-update-bv6lj\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.819257 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlpc\" (UniqueName: \"kubernetes.io/projected/a847cbe7-3090-4d64-9faf-ed4414d614ad-kube-api-access-gdlpc\") pod \"neutron-4dd6-account-create-update-bv6lj\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.819275 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765f7521-d8d5-4034-b94c-e64a698c65ae-operator-scripts\") pod \"neutron-db-create-vhvl9\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.819361 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-combined-ca-bundle\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.822354 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.828779 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-config-data\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.832178 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-combined-ca-bundle\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.840064 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfqpg\" (UniqueName: \"kubernetes.io/projected/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-kube-api-access-sfqpg\") pod \"keystone-db-sync-6xnc6\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.920524 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpzfq\" (UniqueName: \"kubernetes.io/projected/765f7521-d8d5-4034-b94c-e64a698c65ae-kube-api-access-fpzfq\") pod \"neutron-db-create-vhvl9\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.920820 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a847cbe7-3090-4d64-9faf-ed4414d614ad-operator-scripts\") pod \"neutron-4dd6-account-create-update-bv6lj\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.920894 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdlpc\" (UniqueName: \"kubernetes.io/projected/a847cbe7-3090-4d64-9faf-ed4414d614ad-kube-api-access-gdlpc\") pod \"neutron-4dd6-account-create-update-bv6lj\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.920915 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765f7521-d8d5-4034-b94c-e64a698c65ae-operator-scripts\") pod \"neutron-db-create-vhvl9\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.921679 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a847cbe7-3090-4d64-9faf-ed4414d614ad-operator-scripts\") pod \"neutron-4dd6-account-create-update-bv6lj\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.921746 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765f7521-d8d5-4034-b94c-e64a698c65ae-operator-scripts\") pod \"neutron-db-create-vhvl9\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " 
pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.941448 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpzfq\" (UniqueName: \"kubernetes.io/projected/765f7521-d8d5-4034-b94c-e64a698c65ae-kube-api-access-fpzfq\") pod \"neutron-db-create-vhvl9\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.945092 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdlpc\" (UniqueName: \"kubernetes.io/projected/a847cbe7-3090-4d64-9faf-ed4414d614ad-kube-api-access-gdlpc\") pod \"neutron-4dd6-account-create-update-bv6lj\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.950064 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:12 crc kubenswrapper[4955]: I0202 13:18:12.969755 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.016328 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.076477 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nzgmz"] Feb 02 13:18:13 crc kubenswrapper[4955]: W0202 13:18:13.157283 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22defd53_64bc_47b9_86e8_21563ec3a37f.slice/crio-fa32bf83463f1eb8f635f1c16c0c831f9f6d1564165cf2fe1a582ecb5290d768 WatchSource:0}: Error finding container fa32bf83463f1eb8f635f1c16c0c831f9f6d1564165cf2fe1a582ecb5290d768: Status 404 returned error can't find the container with id fa32bf83463f1eb8f635f1c16c0c831f9f6d1564165cf2fe1a582ecb5290d768 Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.424700 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nzgmz" event={"ID":"22defd53-64bc-47b9-86e8-21563ec3a37f","Type":"ContainerStarted","Data":"fa32bf83463f1eb8f635f1c16c0c831f9f6d1564165cf2fe1a582ecb5290d768"} Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.493210 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-g8c5d"] Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.521135 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-21d7-account-create-update-l26fg"] Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.738822 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gfs47"] Feb 02 13:18:13 crc kubenswrapper[4955]: W0202 13:18:13.749154 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddea27d86_5db9_49fd_b9bf_44176e78d3d6.slice/crio-4f659a84702af451d0097b4dee87d7160b02d0c956de01398c8467fa401a8dfe WatchSource:0}: Error finding container 4f659a84702af451d0097b4dee87d7160b02d0c956de01398c8467fa401a8dfe: Status 404 returned error can't find the container with id 4f659a84702af451d0097b4dee87d7160b02d0c956de01398c8467fa401a8dfe Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.803892 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-vhvl9"] Feb 02 13:18:13 crc kubenswrapper[4955]: W0202 13:18:13.818897 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod765f7521_d8d5_4034_b94c_e64a698c65ae.slice/crio-52eeb9763c6779f60dfd0f20dd4348b0d05558c19a62196fdf038723d6256b73 WatchSource:0}: Error finding container 52eeb9763c6779f60dfd0f20dd4348b0d05558c19a62196fdf038723d6256b73: Status 404 returned error can't find the container with id 52eeb9763c6779f60dfd0f20dd4348b0d05558c19a62196fdf038723d6256b73 Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.827994 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-600f-account-create-update-777t8"] Feb 02 13:18:13 crc kubenswrapper[4955]: I0202 13:18:13.896626 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7474-account-create-update-4lqrk"] Feb 02 13:18:13 crc kubenswrapper[4955]: W0202 13:18:13.919004 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcda69844_00d2_4981_bfbd_1d4ed05274d1.slice/crio-feb15daf31b63416f321e258a869325109768992d1fd36cc0dccf2d9065f781a WatchSource:0}: Error finding container feb15daf31b63416f321e258a869325109768992d1fd36cc0dccf2d9065f781a: Status 404 returned error can't find the container with id feb15daf31b63416f321e258a869325109768992d1fd36cc0dccf2d9065f781a Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.115401 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4dd6-account-create-update-bv6lj"] Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.154826 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6xnc6"] Feb 02 13:18:14 crc kubenswrapper[4955]: W0202 13:18:14.183757 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93af2657_6c4c_4163_aeb1_4527c3a6bf1a.slice/crio-3add40f81462d0f20f600308603363bc7c648cee0010449d28f9c7c974840acd WatchSource:0}: Error finding container 3add40f81462d0f20f600308603363bc7c648cee0010449d28f9c7c974840acd: Status 404 returned error can't find the container with id 3add40f81462d0f20f600308603363bc7c648cee0010449d28f9c7c974840acd Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.432140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7474-account-create-update-4lqrk" event={"ID":"cda69844-00d2-4981-bfbd-1d4ed05274d1","Type":"ContainerStarted","Data":"feb15daf31b63416f321e258a869325109768992d1fd36cc0dccf2d9065f781a"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.433503 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nzgmz" event={"ID":"22defd53-64bc-47b9-86e8-21563ec3a37f","Type":"ContainerStarted","Data":"21c3d26a19b7104e2eb1e527113b02315c07975d721a6723a7abc9daca8770ca"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.435307 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6xnc6" event={"ID":"93af2657-6c4c-4163-aeb1-4527c3a6bf1a","Type":"ContainerStarted","Data":"3add40f81462d0f20f600308603363bc7c648cee0010449d28f9c7c974840acd"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.436959 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" 
event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerStarted","Data":"b01e8940e3675f606bc437b031aee89c1c8992b77db4002151ebe698c9a0ab40"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.437938 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gfs47" event={"ID":"dea27d86-5db9-49fd-b9bf-44176e78d3d6","Type":"ContainerStarted","Data":"d8ee3535d77f70e99eb54c38efbe3b431a8e68bab5d349c9c55a0df0bda3768d"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.437969 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gfs47" event={"ID":"dea27d86-5db9-49fd-b9bf-44176e78d3d6","Type":"ContainerStarted","Data":"4f659a84702af451d0097b4dee87d7160b02d0c956de01398c8467fa401a8dfe"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.446602 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-g8c5d" event={"ID":"fe99b176-e998-4cdd-9cef-32407153cc79","Type":"ContainerStarted","Data":"46cf4d138b8cf856a6ce96e55d09ee446998137334c082356c24caac5d427089"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.446669 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-g8c5d" event={"ID":"fe99b176-e998-4cdd-9cef-32407153cc79","Type":"ContainerStarted","Data":"44fe082232c91e593e5d4274000b9ef871aae48ad59ac5dacc39211bb4637fc2"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.447994 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4dd6-account-create-update-bv6lj" event={"ID":"a847cbe7-3090-4d64-9faf-ed4414d614ad","Type":"ContainerStarted","Data":"aae8ea5571ef1c32bb03179e43461486517338a80c54996e49c773a9418eacf4"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.449568 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-21d7-account-create-update-l26fg" event={"ID":"3544eea9-736e-471c-85b4-b59aab2d5533","Type":"ContainerStarted","Data":"2de70f227665485fb9e717a948f6cb1fdf90ab88f09d2a9976259074ff3e57d9"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.449617 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-21d7-account-create-update-l26fg" event={"ID":"3544eea9-736e-471c-85b4-b59aab2d5533","Type":"ContainerStarted","Data":"ade63427491a4c513b97a8dc892b7fa8ae046f8d4f19aeb3ca9359fddd8e9356"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.450863 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vhvl9" event={"ID":"765f7521-d8d5-4034-b94c-e64a698c65ae","Type":"ContainerStarted","Data":"52eeb9763c6779f60dfd0f20dd4348b0d05558c19a62196fdf038723d6256b73"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.452185 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-600f-account-create-update-777t8" event={"ID":"27d03337-7374-44ab-8c95-d092c42d2355","Type":"ContainerStarted","Data":"095950090d95262ee4ace23471e9f8492b59fd2ab81bd7e885b4a25b3aa00b90"} Feb 02 13:18:14 crc kubenswrapper[4955]: I0202 13:18:14.461368 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-nzgmz" podStartSLOduration=2.461349431 podStartE2EDuration="2.461349431s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:14.457754124 +0000 UTC m=+945.370090584" watchObservedRunningTime="2026-02-02 13:18:14.461349431 +0000 UTC 
m=+945.373685881" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.462074 4955 generic.go:334] "Generic (PLEG): container finished" podID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerID="b01e8940e3675f606bc437b031aee89c1c8992b77db4002151ebe698c9a0ab40" exitCode=0 Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.462253 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerDied","Data":"b01e8940e3675f606bc437b031aee89c1c8992b77db4002151ebe698c9a0ab40"} Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.467851 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-600f-account-create-update-777t8" event={"ID":"27d03337-7374-44ab-8c95-d092c42d2355","Type":"ContainerStarted","Data":"40b43eb3ce14e9a9b154c05904ab9fcc6e1e8c89dc11591de887e31181cfff9f"} Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.470277 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4dd6-account-create-update-bv6lj" event={"ID":"a847cbe7-3090-4d64-9faf-ed4414d614ad","Type":"ContainerStarted","Data":"dcc095c9764baf9dc5b5dad2c9e7a3870622884b6b47c5e6b32fc574f79aecb6"} Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.472024 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7474-account-create-update-4lqrk" event={"ID":"cda69844-00d2-4981-bfbd-1d4ed05274d1","Type":"ContainerStarted","Data":"ece2a490f5a6a52a96458ef7bad0fc2e3a57119e26b83a997ac70eadb4b94287"} Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.473853 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vhvl9" event={"ID":"765f7521-d8d5-4034-b94c-e64a698c65ae","Type":"ContainerStarted","Data":"9431690c224249cc93aefe1fba0be8d28d2321d6020f9b450e3daf79f7f98943"} Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.508394 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-gfs47" podStartSLOduration=3.508373997 podStartE2EDuration="3.508373997s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.49441565 +0000 UTC m=+946.406752100" watchObservedRunningTime="2026-02-02 13:18:15.508373997 +0000 UTC m=+946.420710457" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.527835 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7474-account-create-update-4lqrk" podStartSLOduration=3.527810857 podStartE2EDuration="3.527810857s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.515931651 +0000 UTC m=+946.428268101" watchObservedRunningTime="2026-02-02 13:18:15.527810857 +0000 UTC m=+946.440147317" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.537341 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-g8c5d" podStartSLOduration=3.537323967 podStartE2EDuration="3.537323967s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.534832777 +0000 UTC m=+946.447169227" watchObservedRunningTime="2026-02-02 
13:18:15.537323967 +0000 UTC m=+946.449660427" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.556326 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-4dd6-account-create-update-bv6lj" podStartSLOduration=3.556308555 podStartE2EDuration="3.556308555s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.549196293 +0000 UTC m=+946.461532743" watchObservedRunningTime="2026-02-02 13:18:15.556308555 +0000 UTC m=+946.468645005" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.575119 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-600f-account-create-update-777t8" podStartSLOduration=3.575104559 podStartE2EDuration="3.575104559s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.568207153 +0000 UTC m=+946.480543603" watchObservedRunningTime="2026-02-02 13:18:15.575104559 +0000 UTC m=+946.487441009" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.588283 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-vhvl9" podStartSLOduration=3.588260177 podStartE2EDuration="3.588260177s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.580675274 +0000 UTC m=+946.493011734" watchObservedRunningTime="2026-02-02 13:18:15.588260177 +0000 UTC m=+946.500596627" Feb 02 13:18:15 crc kubenswrapper[4955]: I0202 13:18:15.599880 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-21d7-account-create-update-l26fg" podStartSLOduration=3.599862378 podStartE2EDuration="3.599862378s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:15.593257918 +0000 UTC m=+946.505594368" watchObservedRunningTime="2026-02-02 13:18:15.599862378 +0000 UTC m=+946.512198828" Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.499083 4955 generic.go:334] "Generic (PLEG): container finished" podID="22defd53-64bc-47b9-86e8-21563ec3a37f" containerID="21c3d26a19b7104e2eb1e527113b02315c07975d721a6723a7abc9daca8770ca" exitCode=0 Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.499211 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nzgmz" event={"ID":"22defd53-64bc-47b9-86e8-21563ec3a37f","Type":"ContainerDied","Data":"21c3d26a19b7104e2eb1e527113b02315c07975d721a6723a7abc9daca8770ca"} Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.502610 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerStarted","Data":"b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14"} Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.508850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"3d1d66728b1157ae29a2a88a026f43039b9f6e01046a3e8d79593fc764b3c4cb"} Feb 02 13:18:18 crc 
kubenswrapper[4955]: I0202 13:18:18.509197 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"8732cb36b29e774d8b501f3c6d292d00ee50b6a3cc8d7bb0017479661e5d5d20"} Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.514735 4955 generic.go:334] "Generic (PLEG): container finished" podID="dea27d86-5db9-49fd-b9bf-44176e78d3d6" containerID="d8ee3535d77f70e99eb54c38efbe3b431a8e68bab5d349c9c55a0df0bda3768d" exitCode=0 Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.514802 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gfs47" event={"ID":"dea27d86-5db9-49fd-b9bf-44176e78d3d6","Type":"ContainerDied","Data":"d8ee3535d77f70e99eb54c38efbe3b431a8e68bab5d349c9c55a0df0bda3768d"} Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.537126 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe99b176-e998-4cdd-9cef-32407153cc79" containerID="46cf4d138b8cf856a6ce96e55d09ee446998137334c082356c24caac5d427089" exitCode=0 Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.537213 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-g8c5d" event={"ID":"fe99b176-e998-4cdd-9cef-32407153cc79","Type":"ContainerDied","Data":"46cf4d138b8cf856a6ce96e55d09ee446998137334c082356c24caac5d427089"} Feb 02 13:18:18 crc kubenswrapper[4955]: I0202 13:18:18.584797 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpct9" podStartSLOduration=2.745621802 podStartE2EDuration="12.584775925s" podCreationTimestamp="2026-02-02 13:18:06 +0000 UTC" firstStartedPulling="2026-02-02 13:18:08.214773202 +0000 UTC m=+939.127109652" lastFinishedPulling="2026-02-02 13:18:18.053927335 +0000 UTC m=+948.966263775" observedRunningTime="2026-02-02 13:18:18.555096838 +0000 UTC m=+949.467433298" watchObservedRunningTime="2026-02-02 13:18:18.584775925 +0000 UTC m=+949.497112375" Feb 02 13:18:19 crc kubenswrapper[4955]: I0202 13:18:19.551510 4955 generic.go:334] "Generic (PLEG): container finished" podID="765f7521-d8d5-4034-b94c-e64a698c65ae" containerID="9431690c224249cc93aefe1fba0be8d28d2321d6020f9b450e3daf79f7f98943" exitCode=0 Feb 02 13:18:19 crc kubenswrapper[4955]: I0202 13:18:19.551651 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vhvl9" event={"ID":"765f7521-d8d5-4034-b94c-e64a698c65ae","Type":"ContainerDied","Data":"9431690c224249cc93aefe1fba0be8d28d2321d6020f9b450e3daf79f7f98943"} Feb 02 13:18:19 crc kubenswrapper[4955]: I0202 13:18:19.558446 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"d7dc5e5b5ba59690667ad1feaa5a5d93c90e30747216221c03756e56130a8a6a"} Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.575658 4955 generic.go:334] "Generic (PLEG): container finished" podID="3544eea9-736e-471c-85b4-b59aab2d5533" containerID="2de70f227665485fb9e717a948f6cb1fdf90ab88f09d2a9976259074ff3e57d9" exitCode=0 Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.575736 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-21d7-account-create-update-l26fg" event={"ID":"3544eea9-736e-471c-85b4-b59aab2d5533","Type":"ContainerDied","Data":"2de70f227665485fb9e717a948f6cb1fdf90ab88f09d2a9976259074ff3e57d9"} Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.586939 4955 
generic.go:334] "Generic (PLEG): container finished" podID="27d03337-7374-44ab-8c95-d092c42d2355" containerID="40b43eb3ce14e9a9b154c05904ab9fcc6e1e8c89dc11591de887e31181cfff9f" exitCode=0 Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.587007 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-600f-account-create-update-777t8" event={"ID":"27d03337-7374-44ab-8c95-d092c42d2355","Type":"ContainerDied","Data":"40b43eb3ce14e9a9b154c05904ab9fcc6e1e8c89dc11591de887e31181cfff9f"} Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.595118 4955 generic.go:334] "Generic (PLEG): container finished" podID="a847cbe7-3090-4d64-9faf-ed4414d614ad" containerID="dcc095c9764baf9dc5b5dad2c9e7a3870622884b6b47c5e6b32fc574f79aecb6" exitCode=0 Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.595193 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4dd6-account-create-update-bv6lj" event={"ID":"a847cbe7-3090-4d64-9faf-ed4414d614ad","Type":"ContainerDied","Data":"dcc095c9764baf9dc5b5dad2c9e7a3870622884b6b47c5e6b32fc574f79aecb6"} Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.597498 4955 generic.go:334] "Generic (PLEG): container finished" podID="cda69844-00d2-4981-bfbd-1d4ed05274d1" containerID="ece2a490f5a6a52a96458ef7bad0fc2e3a57119e26b83a997ac70eadb4b94287" exitCode=0 Feb 02 13:18:20 crc kubenswrapper[4955]: I0202 13:18:20.597725 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7474-account-create-update-4lqrk" event={"ID":"cda69844-00d2-4981-bfbd-1d4ed05274d1","Type":"ContainerDied","Data":"ece2a490f5a6a52a96458ef7bad0fc2e3a57119e26b83a997ac70eadb4b94287"} Feb 02 13:18:22 crc kubenswrapper[4955]: I0202 13:18:22.068839 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.152438 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.174107 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.185966 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.224432 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.225469 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.227077 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnjmw\" (UniqueName: \"kubernetes.io/projected/fe99b176-e998-4cdd-9cef-32407153cc79-kube-api-access-lnjmw\") pod \"fe99b176-e998-4cdd-9cef-32407153cc79\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.227116 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765f7521-d8d5-4034-b94c-e64a698c65ae-operator-scripts\") pod \"765f7521-d8d5-4034-b94c-e64a698c65ae\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.227179 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpzfq\" (UniqueName: \"kubernetes.io/projected/765f7521-d8d5-4034-b94c-e64a698c65ae-kube-api-access-fpzfq\") pod \"765f7521-d8d5-4034-b94c-e64a698c65ae\" (UID: \"765f7521-d8d5-4034-b94c-e64a698c65ae\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.227263 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe99b176-e998-4cdd-9cef-32407153cc79-operator-scripts\") pod \"fe99b176-e998-4cdd-9cef-32407153cc79\" (UID: \"fe99b176-e998-4cdd-9cef-32407153cc79\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.227315 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea27d86-5db9-49fd-b9bf-44176e78d3d6-operator-scripts\") pod \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.227348 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-654ll\" (UniqueName: \"kubernetes.io/projected/dea27d86-5db9-49fd-b9bf-44176e78d3d6-kube-api-access-654ll\") pod \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\" (UID: \"dea27d86-5db9-49fd-b9bf-44176e78d3d6\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.234941 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe99b176-e998-4cdd-9cef-32407153cc79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe99b176-e998-4cdd-9cef-32407153cc79" (UID: "fe99b176-e998-4cdd-9cef-32407153cc79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.235368 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea27d86-5db9-49fd-b9bf-44176e78d3d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dea27d86-5db9-49fd-b9bf-44176e78d3d6" (UID: "dea27d86-5db9-49fd-b9bf-44176e78d3d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.235917 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/765f7521-d8d5-4034-b94c-e64a698c65ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "765f7521-d8d5-4034-b94c-e64a698c65ae" (UID: "765f7521-d8d5-4034-b94c-e64a698c65ae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.241636 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.244903 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea27d86-5db9-49fd-b9bf-44176e78d3d6-kube-api-access-654ll" (OuterVolumeSpecName: "kube-api-access-654ll") pod "dea27d86-5db9-49fd-b9bf-44176e78d3d6" (UID: "dea27d86-5db9-49fd-b9bf-44176e78d3d6"). InnerVolumeSpecName "kube-api-access-654ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.256927 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765f7521-d8d5-4034-b94c-e64a698c65ae-kube-api-access-fpzfq" (OuterVolumeSpecName: "kube-api-access-fpzfq") pod "765f7521-d8d5-4034-b94c-e64a698c65ae" (UID: "765f7521-d8d5-4034-b94c-e64a698c65ae"). InnerVolumeSpecName "kube-api-access-fpzfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.278622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe99b176-e998-4cdd-9cef-32407153cc79-kube-api-access-lnjmw" (OuterVolumeSpecName: "kube-api-access-lnjmw") pod "fe99b176-e998-4cdd-9cef-32407153cc79" (UID: "fe99b176-e998-4cdd-9cef-32407153cc79"). InnerVolumeSpecName "kube-api-access-lnjmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.328381 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329207 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z75h\" (UniqueName: \"kubernetes.io/projected/22defd53-64bc-47b9-86e8-21563ec3a37f-kube-api-access-8z75h\") pod \"22defd53-64bc-47b9-86e8-21563ec3a37f\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329244 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgg8\" (UniqueName: \"kubernetes.io/projected/27d03337-7374-44ab-8c95-d092c42d2355-kube-api-access-wmgg8\") pod \"27d03337-7374-44ab-8c95-d092c42d2355\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329262 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdlpc\" (UniqueName: \"kubernetes.io/projected/a847cbe7-3090-4d64-9faf-ed4414d614ad-kube-api-access-gdlpc\") pod \"a847cbe7-3090-4d64-9faf-ed4414d614ad\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329297 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a847cbe7-3090-4d64-9faf-ed4414d614ad-operator-scripts\") pod \"a847cbe7-3090-4d64-9faf-ed4414d614ad\" (UID: \"a847cbe7-3090-4d64-9faf-ed4414d614ad\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329370 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/22defd53-64bc-47b9-86e8-21563ec3a37f-operator-scripts\") pod \"22defd53-64bc-47b9-86e8-21563ec3a37f\" (UID: \"22defd53-64bc-47b9-86e8-21563ec3a37f\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329406 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d03337-7374-44ab-8c95-d092c42d2355-operator-scripts\") pod \"27d03337-7374-44ab-8c95-d092c42d2355\" (UID: \"27d03337-7374-44ab-8c95-d092c42d2355\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329702 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/765f7521-d8d5-4034-b94c-e64a698c65ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329720 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnjmw\" (UniqueName: \"kubernetes.io/projected/fe99b176-e998-4cdd-9cef-32407153cc79-kube-api-access-lnjmw\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329729 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpzfq\" (UniqueName: \"kubernetes.io/projected/765f7521-d8d5-4034-b94c-e64a698c65ae-kube-api-access-fpzfq\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329738 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe99b176-e998-4cdd-9cef-32407153cc79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329747 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea27d86-5db9-49fd-b9bf-44176e78d3d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.329756 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-654ll\" (UniqueName: \"kubernetes.io/projected/dea27d86-5db9-49fd-b9bf-44176e78d3d6-kube-api-access-654ll\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.330151 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d03337-7374-44ab-8c95-d092c42d2355-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27d03337-7374-44ab-8c95-d092c42d2355" (UID: "27d03337-7374-44ab-8c95-d092c42d2355"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.331154 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22defd53-64bc-47b9-86e8-21563ec3a37f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22defd53-64bc-47b9-86e8-21563ec3a37f" (UID: "22defd53-64bc-47b9-86e8-21563ec3a37f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.331656 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a847cbe7-3090-4d64-9faf-ed4414d614ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a847cbe7-3090-4d64-9faf-ed4414d614ad" (UID: "a847cbe7-3090-4d64-9faf-ed4414d614ad"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.342853 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a847cbe7-3090-4d64-9faf-ed4414d614ad-kube-api-access-gdlpc" (OuterVolumeSpecName: "kube-api-access-gdlpc") pod "a847cbe7-3090-4d64-9faf-ed4414d614ad" (UID: "a847cbe7-3090-4d64-9faf-ed4414d614ad"). InnerVolumeSpecName "kube-api-access-gdlpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.342862 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22defd53-64bc-47b9-86e8-21563ec3a37f-kube-api-access-8z75h" (OuterVolumeSpecName: "kube-api-access-8z75h") pod "22defd53-64bc-47b9-86e8-21563ec3a37f" (UID: "22defd53-64bc-47b9-86e8-21563ec3a37f"). InnerVolumeSpecName "kube-api-access-8z75h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.342929 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d03337-7374-44ab-8c95-d092c42d2355-kube-api-access-wmgg8" (OuterVolumeSpecName: "kube-api-access-wmgg8") pod "27d03337-7374-44ab-8c95-d092c42d2355" (UID: "27d03337-7374-44ab-8c95-d092c42d2355"). InnerVolumeSpecName "kube-api-access-wmgg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.357105 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.431637 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3544eea9-736e-471c-85b4-b59aab2d5533-operator-scripts\") pod \"3544eea9-736e-471c-85b4-b59aab2d5533\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.431726 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrt26\" (UniqueName: \"kubernetes.io/projected/cda69844-00d2-4981-bfbd-1d4ed05274d1-kube-api-access-wrt26\") pod \"cda69844-00d2-4981-bfbd-1d4ed05274d1\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.431788 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda69844-00d2-4981-bfbd-1d4ed05274d1-operator-scripts\") pod \"cda69844-00d2-4981-bfbd-1d4ed05274d1\" (UID: \"cda69844-00d2-4981-bfbd-1d4ed05274d1\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.431883 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z54qd\" (UniqueName: \"kubernetes.io/projected/3544eea9-736e-471c-85b4-b59aab2d5533-kube-api-access-z54qd\") pod \"3544eea9-736e-471c-85b4-b59aab2d5533\" (UID: \"3544eea9-736e-471c-85b4-b59aab2d5533\") " Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.432097 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3544eea9-736e-471c-85b4-b59aab2d5533-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3544eea9-736e-471c-85b4-b59aab2d5533" (UID: "3544eea9-736e-471c-85b4-b59aab2d5533"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.432534 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda69844-00d2-4981-bfbd-1d4ed05274d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cda69844-00d2-4981-bfbd-1d4ed05274d1" (UID: "cda69844-00d2-4981-bfbd-1d4ed05274d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433185 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22defd53-64bc-47b9-86e8-21563ec3a37f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433208 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d03337-7374-44ab-8c95-d092c42d2355-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433243 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3544eea9-736e-471c-85b4-b59aab2d5533-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433255 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda69844-00d2-4981-bfbd-1d4ed05274d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433286 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z75h\" (UniqueName: \"kubernetes.io/projected/22defd53-64bc-47b9-86e8-21563ec3a37f-kube-api-access-8z75h\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433318 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgg8\" (UniqueName: \"kubernetes.io/projected/27d03337-7374-44ab-8c95-d092c42d2355-kube-api-access-wmgg8\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433330 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdlpc\" (UniqueName: \"kubernetes.io/projected/a847cbe7-3090-4d64-9faf-ed4414d614ad-kube-api-access-gdlpc\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.433341 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a847cbe7-3090-4d64-9faf-ed4414d614ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.435838 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3544eea9-736e-471c-85b4-b59aab2d5533-kube-api-access-z54qd" (OuterVolumeSpecName: "kube-api-access-z54qd") pod "3544eea9-736e-471c-85b4-b59aab2d5533" (UID: "3544eea9-736e-471c-85b4-b59aab2d5533"). InnerVolumeSpecName "kube-api-access-z54qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.436013 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda69844-00d2-4981-bfbd-1d4ed05274d1-kube-api-access-wrt26" (OuterVolumeSpecName: "kube-api-access-wrt26") pod "cda69844-00d2-4981-bfbd-1d4ed05274d1" (UID: "cda69844-00d2-4981-bfbd-1d4ed05274d1"). 
InnerVolumeSpecName "kube-api-access-wrt26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.535784 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z54qd\" (UniqueName: \"kubernetes.io/projected/3544eea9-736e-471c-85b4-b59aab2d5533-kube-api-access-z54qd\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.535830 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrt26\" (UniqueName: \"kubernetes.io/projected/cda69844-00d2-4981-bfbd-1d4ed05274d1-kube-api-access-wrt26\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.621974 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-g8c5d" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.621987 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-g8c5d" event={"ID":"fe99b176-e998-4cdd-9cef-32407153cc79","Type":"ContainerDied","Data":"44fe082232c91e593e5d4274000b9ef871aae48ad59ac5dacc39211bb4637fc2"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.622054 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44fe082232c91e593e5d4274000b9ef871aae48ad59ac5dacc39211bb4637fc2" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.623454 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4dd6-account-create-update-bv6lj" event={"ID":"a847cbe7-3090-4d64-9faf-ed4414d614ad","Type":"ContainerDied","Data":"aae8ea5571ef1c32bb03179e43461486517338a80c54996e49c773a9418eacf4"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.623475 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae8ea5571ef1c32bb03179e43461486517338a80c54996e49c773a9418eacf4" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.623499 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4dd6-account-create-update-bv6lj" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.624656 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nzgmz" event={"ID":"22defd53-64bc-47b9-86e8-21563ec3a37f","Type":"ContainerDied","Data":"fa32bf83463f1eb8f635f1c16c0c831f9f6d1564165cf2fe1a582ecb5290d768"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.624674 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa32bf83463f1eb8f635f1c16c0c831f9f6d1564165cf2fe1a582ecb5290d768" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.624723 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nzgmz" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.638470 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-21d7-account-create-update-l26fg" event={"ID":"3544eea9-736e-471c-85b4-b59aab2d5533","Type":"ContainerDied","Data":"ade63427491a4c513b97a8dc892b7fa8ae046f8d4f19aeb3ca9359fddd8e9356"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.638503 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade63427491a4c513b97a8dc892b7fa8ae046f8d4f19aeb3ca9359fddd8e9356" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.638595 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-21d7-account-create-update-l26fg" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.642675 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6xnc6" event={"ID":"93af2657-6c4c-4163-aeb1-4527c3a6bf1a","Type":"ContainerStarted","Data":"914d456e1d094fad591bfeace5376b27ca7d57d2845570178729d1674a9b069c"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.647662 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vhvl9" event={"ID":"765f7521-d8d5-4034-b94c-e64a698c65ae","Type":"ContainerDied","Data":"52eeb9763c6779f60dfd0f20dd4348b0d05558c19a62196fdf038723d6256b73"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.647741 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52eeb9763c6779f60dfd0f20dd4348b0d05558c19a62196fdf038723d6256b73" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.647739 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vhvl9" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.651482 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-600f-account-create-update-777t8" event={"ID":"27d03337-7374-44ab-8c95-d092c42d2355","Type":"ContainerDied","Data":"095950090d95262ee4ace23471e9f8492b59fd2ab81bd7e885b4a25b3aa00b90"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.651507 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095950090d95262ee4ace23471e9f8492b59fd2ab81bd7e885b4a25b3aa00b90" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.651547 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-600f-account-create-update-777t8" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.660107 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"4eea4421e12ed1d8817c83c4b188554ee2436caf145422476114e2d23830452b"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.662185 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gfs47" event={"ID":"dea27d86-5db9-49fd-b9bf-44176e78d3d6","Type":"ContainerDied","Data":"4f659a84702af451d0097b4dee87d7160b02d0c956de01398c8467fa401a8dfe"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.662236 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f659a84702af451d0097b4dee87d7160b02d0c956de01398c8467fa401a8dfe" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.662284 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gfs47" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.665067 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7474-account-create-update-4lqrk" event={"ID":"cda69844-00d2-4981-bfbd-1d4ed05274d1","Type":"ContainerDied","Data":"feb15daf31b63416f321e258a869325109768992d1fd36cc0dccf2d9065f781a"} Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.665102 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb15daf31b63416f321e258a869325109768992d1fd36cc0dccf2d9065f781a" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.665148 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7474-account-create-update-4lqrk" Feb 02 13:18:23 crc kubenswrapper[4955]: I0202 13:18:23.691117 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6xnc6" podStartSLOduration=2.83354409 podStartE2EDuration="11.691099337s" podCreationTimestamp="2026-02-02 13:18:12 +0000 UTC" firstStartedPulling="2026-02-02 13:18:14.189933127 +0000 UTC m=+945.102269577" lastFinishedPulling="2026-02-02 13:18:23.047488374 +0000 UTC m=+953.959824824" observedRunningTime="2026-02-02 13:18:23.677242583 +0000 UTC m=+954.589579023" watchObservedRunningTime="2026-02-02 13:18:23.691099337 +0000 UTC m=+954.603435777" Feb 02 13:18:25 crc kubenswrapper[4955]: I0202 13:18:25.689787 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"5ba79d4e6b95ba328356085eb2f2d25f36ecec72f038645bcf30a2d698f3d273"} Feb 02 13:18:25 crc kubenswrapper[4955]: I0202 13:18:25.690353 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"99b59b485a31b0f6ad0aeafa3c22af07c7765b3c562be3c1b85874536000486a"} Feb 02 13:18:25 crc kubenswrapper[4955]: I0202 13:18:25.690365 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"623c3190e884076a01c9fca6fc21d71171d00751e76bdcd0f08fff2bcad8edb2"} Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.444160 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.444594 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.702833 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"0c66a38310fba8080437dfa15ccc325c306db96c847e173e4fc0b83568890a88"} Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.702901 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"b495261bf622cbc9d6f3d29eee7b06bb14b97758bc7327b4c2b83bd3530ffc9b"} Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.702911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"200d143d9156e9fe4abd473cc09485e0b80874f44cb3f032fe266df2999d9ae7"} Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.702918 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec1a6503-248d-4f72-a3ab-e23df2ca163d","Type":"ContainerStarted","Data":"fa110a6c3831229809651635f3abe0df3550b23322f9e5fbf714e43c503c8961"} Feb 02 13:18:26 crc kubenswrapper[4955]: I0202 13:18:26.744936 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.553196874 podStartE2EDuration="59.74491726s" podCreationTimestamp="2026-02-02 13:17:27 +0000 UTC" firstStartedPulling="2026-02-02 13:18:05.797148084 +0000 UTC m=+936.709484524" 
lastFinishedPulling="2026-02-02 13:18:24.988868449 +0000 UTC m=+955.901204910" observedRunningTime="2026-02-02 13:18:26.740736629 +0000 UTC m=+957.653073099" watchObservedRunningTime="2026-02-02 13:18:26.74491726 +0000 UTC m=+957.657253710" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000408 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fmhxp"] Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000755 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3544eea9-736e-471c-85b4-b59aab2d5533" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000768 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3544eea9-736e-471c-85b4-b59aab2d5533" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000777 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22defd53-64bc-47b9-86e8-21563ec3a37f" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000784 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="22defd53-64bc-47b9-86e8-21563ec3a37f" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000804 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765f7521-d8d5-4034-b94c-e64a698c65ae" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000813 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="765f7521-d8d5-4034-b94c-e64a698c65ae" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000823 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a847cbe7-3090-4d64-9faf-ed4414d614ad" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000829 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a847cbe7-3090-4d64-9faf-ed4414d614ad" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000841 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d03337-7374-44ab-8c95-d092c42d2355" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000848 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d03337-7374-44ab-8c95-d092c42d2355" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000862 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea27d86-5db9-49fd-b9bf-44176e78d3d6" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000868 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea27d86-5db9-49fd-b9bf-44176e78d3d6" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000878 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda69844-00d2-4981-bfbd-1d4ed05274d1" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000883 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda69844-00d2-4981-bfbd-1d4ed05274d1" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: E0202 13:18:27.000892 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe99b176-e998-4cdd-9cef-32407153cc79" containerName="mariadb-database-create" 
Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.000898 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe99b176-e998-4cdd-9cef-32407153cc79" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001060 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="3544eea9-736e-471c-85b4-b59aab2d5533" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001069 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d03337-7374-44ab-8c95-d092c42d2355" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001077 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda69844-00d2-4981-bfbd-1d4ed05274d1" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001087 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="22defd53-64bc-47b9-86e8-21563ec3a37f" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001094 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea27d86-5db9-49fd-b9bf-44176e78d3d6" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001104 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe99b176-e998-4cdd-9cef-32407153cc79" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001111 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="765f7521-d8d5-4034-b94c-e64a698c65ae" containerName="mariadb-database-create" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001118 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a847cbe7-3090-4d64-9faf-ed4414d614ad" containerName="mariadb-account-create-update" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.001952 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.004181 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.019428 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fmhxp"] Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.094780 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.094897 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.094928 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.094967 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-config\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.094989 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.095031 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb9cb\" (UniqueName: \"kubernetes.io/projected/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-kube-api-access-cb9cb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197010 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197097 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: 
\"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197117 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197146 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-config\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197162 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb9cb\" (UniqueName: \"kubernetes.io/projected/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-kube-api-access-cb9cb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.197991 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.198096 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-config\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.198683 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.198721 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.198757 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: 
I0202 13:18:27.219461 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb9cb\" (UniqueName: \"kubernetes.io/projected/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-kube-api-access-cb9cb\") pod \"dnsmasq-dns-764c5664d7-fmhxp\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.317087 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.500015 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpct9" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="registry-server" probeResult="failure" output=< Feb 02 13:18:27 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 13:18:27 crc kubenswrapper[4955]: > Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.711874 4955 generic.go:334] "Generic (PLEG): container finished" podID="d5111fc8-b31a-4644-aae9-5a89e4d5da9a" containerID="1feb7d9e24726f65a8dc626bd665e9c2fca46e0ac33505c58d0d65765b4593b0" exitCode=0 Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.711940 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4r26c" event={"ID":"d5111fc8-b31a-4644-aae9-5a89e4d5da9a","Type":"ContainerDied","Data":"1feb7d9e24726f65a8dc626bd665e9c2fca46e0ac33505c58d0d65765b4593b0"} Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.714277 4955 generic.go:334] "Generic (PLEG): container finished" podID="93af2657-6c4c-4163-aeb1-4527c3a6bf1a" containerID="914d456e1d094fad591bfeace5376b27ca7d57d2845570178729d1674a9b069c" exitCode=0 Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.714325 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6xnc6" event={"ID":"93af2657-6c4c-4163-aeb1-4527c3a6bf1a","Type":"ContainerDied","Data":"914d456e1d094fad591bfeace5376b27ca7d57d2845570178729d1674a9b069c"} Feb 02 13:18:27 crc kubenswrapper[4955]: I0202 13:18:27.778721 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fmhxp"] Feb 02 13:18:27 crc kubenswrapper[4955]: W0202 13:18:27.781977 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cd9a3cb_4c57_435d_86e7_916155f5f1f4.slice/crio-40d6c72d9435619f548d73d18febceb9ce74d7e6c0651e37358717f5795fa7e0 WatchSource:0}: Error finding container 40d6c72d9435619f548d73d18febceb9ce74d7e6c0651e37358717f5795fa7e0: Status 404 returned error can't find the container with id 40d6c72d9435619f548d73d18febceb9ce74d7e6c0651e37358717f5795fa7e0 Feb 02 13:18:28 crc kubenswrapper[4955]: I0202 13:18:28.724313 4955 generic.go:334] "Generic (PLEG): container finished" podID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerID="175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0" exitCode=0 Feb 02 13:18:28 crc kubenswrapper[4955]: I0202 13:18:28.724410 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" event={"ID":"2cd9a3cb-4c57-435d-86e7-916155f5f1f4","Type":"ContainerDied","Data":"175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0"}
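The "Probe failed" block above is a startup probe for the redhat-operators registry-server container, and the journal captures the probe's own output: timeout: failed to connect service ":50051" within 1s. That output is in the style of a gRPC health probe run against port 50051 with a one-second deadline; the pod manifest is not part of this log, so the exact probe command is an assumption. A sketch of an equivalent check using the standard gRPC health/v1 API:

```go
// probe_sketch.go — approximate the failing startup probe above: dial
// localhost:50051 and run a gRPC health check under a 1s deadline.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err == nil {
		defer conn.Close()
		resp, cerr := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
		if cerr == nil && resp.GetStatus() == healthpb.HealthCheckResponse_SERVING {
			return // serving: the probe would succeed
		}
	}
	// Mirror the output format captured in the journal record above.
	fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
	os.Exit(1)
}
```

A failing startup probe only delays the container's readiness/liveness checks; the kubelet keeps retrying it, which is why the pod reappears in later probe records rather than being restarted immediately.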
event={"ID":"2cd9a3cb-4c57-435d-86e7-916155f5f1f4","Type":"ContainerStarted","Data":"40d6c72d9435619f548d73d18febceb9ce74d7e6c0651e37358717f5795fa7e0"} Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.021668 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.121648 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4r26c" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.133149 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfqpg\" (UniqueName: \"kubernetes.io/projected/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-kube-api-access-sfqpg\") pod \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.133406 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-config-data\") pod \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.133600 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-combined-ca-bundle\") pod \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\" (UID: \"93af2657-6c4c-4163-aeb1-4527c3a6bf1a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.139788 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-kube-api-access-sfqpg" (OuterVolumeSpecName: "kube-api-access-sfqpg") pod "93af2657-6c4c-4163-aeb1-4527c3a6bf1a" (UID: "93af2657-6c4c-4163-aeb1-4527c3a6bf1a"). InnerVolumeSpecName "kube-api-access-sfqpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.176683 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93af2657-6c4c-4163-aeb1-4527c3a6bf1a" (UID: "93af2657-6c4c-4163-aeb1-4527c3a6bf1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.193701 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-config-data" (OuterVolumeSpecName: "config-data") pod "93af2657-6c4c-4163-aeb1-4527c3a6bf1a" (UID: "93af2657-6c4c-4163-aeb1-4527c3a6bf1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.234854 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfg9x\" (UniqueName: \"kubernetes.io/projected/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-kube-api-access-cfg9x\") pod \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.234909 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-config-data\") pod \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.235014 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-db-sync-config-data\") pod \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.235114 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-combined-ca-bundle\") pod \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\" (UID: \"d5111fc8-b31a-4644-aae9-5a89e4d5da9a\") " Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.235475 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.235497 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfqpg\" (UniqueName: \"kubernetes.io/projected/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-kube-api-access-sfqpg\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.235510 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93af2657-6c4c-4163-aeb1-4527c3a6bf1a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.239776 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5111fc8-b31a-4644-aae9-5a89e4d5da9a" (UID: "d5111fc8-b31a-4644-aae9-5a89e4d5da9a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.239828 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-kube-api-access-cfg9x" (OuterVolumeSpecName: "kube-api-access-cfg9x") pod "d5111fc8-b31a-4644-aae9-5a89e4d5da9a" (UID: "d5111fc8-b31a-4644-aae9-5a89e4d5da9a"). InnerVolumeSpecName "kube-api-access-cfg9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.259780 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5111fc8-b31a-4644-aae9-5a89e4d5da9a" (UID: "d5111fc8-b31a-4644-aae9-5a89e4d5da9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.280185 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-config-data" (OuterVolumeSpecName: "config-data") pod "d5111fc8-b31a-4644-aae9-5a89e4d5da9a" (UID: "d5111fc8-b31a-4644-aae9-5a89e4d5da9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.337265 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.337298 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfg9x\" (UniqueName: \"kubernetes.io/projected/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-kube-api-access-cfg9x\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.337311 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.337320 4955 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5111fc8-b31a-4644-aae9-5a89e4d5da9a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.737312 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" event={"ID":"2cd9a3cb-4c57-435d-86e7-916155f5f1f4","Type":"ContainerStarted","Data":"b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d"} Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.738212 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.744113 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6xnc6" event={"ID":"93af2657-6c4c-4163-aeb1-4527c3a6bf1a","Type":"ContainerDied","Data":"3add40f81462d0f20f600308603363bc7c648cee0010449d28f9c7c974840acd"} Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.744161 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3add40f81462d0f20f600308603363bc7c648cee0010449d28f9c7c974840acd" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.744234 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6xnc6" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.751593 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4r26c" event={"ID":"d5111fc8-b31a-4644-aae9-5a89e4d5da9a","Type":"ContainerDied","Data":"f78f46c7388410a7f2f58a2518cc819516e8d71a4875a086f6a90cf12a011266"} Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.751638 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78f46c7388410a7f2f58a2518cc819516e8d71a4875a086f6a90cf12a011266" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.751700 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4r26c" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.774602 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" podStartSLOduration=3.774582928 podStartE2EDuration="3.774582928s" podCreationTimestamp="2026-02-02 13:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:29.757936956 +0000 UTC m=+960.670273406" watchObservedRunningTime="2026-02-02 13:18:29.774582928 +0000 UTC m=+960.686919388" Feb 02 13:18:29 crc kubenswrapper[4955]: I0202 13:18:29.979678 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fmhxp"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.005799 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kzv6z"] Feb 02 13:18:30 crc kubenswrapper[4955]: E0202 13:18:30.006135 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93af2657-6c4c-4163-aeb1-4527c3a6bf1a" containerName="keystone-db-sync" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.006152 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="93af2657-6c4c-4163-aeb1-4527c3a6bf1a" containerName="keystone-db-sync" Feb 02 13:18:30 crc kubenswrapper[4955]: E0202 13:18:30.006163 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5111fc8-b31a-4644-aae9-5a89e4d5da9a" containerName="glance-db-sync" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.006170 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5111fc8-b31a-4644-aae9-5a89e4d5da9a" containerName="glance-db-sync" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.006332 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5111fc8-b31a-4644-aae9-5a89e4d5da9a" containerName="glance-db-sync" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.006355 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="93af2657-6c4c-4163-aeb1-4527c3a6bf1a" containerName="keystone-db-sync" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.011148 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.035448 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kzv6z"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.113971 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vb8sd"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.115214 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.122466 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.122672 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.122722 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.122910 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jnmgd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.123230 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.135115 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vb8sd"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.159804 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.159888 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwx2z\" (UniqueName: \"kubernetes.io/projected/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-kube-api-access-fwx2z\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.159986 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.160056 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.160083 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-config\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.160107 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.217203 4955 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kzv6z"] Feb 02 13:18:30 crc kubenswrapper[4955]: E0202 13:18:30.217879 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-fwx2z ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" podUID="1d1ab56a-84b3-4a03-87ea-93b40760c8bf" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261130 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-config-data\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261493 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261534 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-scripts\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261685 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qklm4\" (UniqueName: \"kubernetes.io/projected/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-kube-api-access-qklm4\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261709 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-config\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261736 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-fernet-keys\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd"
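Note the E0202 pod_workers.go:1301 record above: the SyncLoop DELETE for dnsmasq-dns-5959f8865f-kzv6z arrived while its six volumes were still being set up, so the worker's sync context was canceled and the sync aborted with every volume still listed as unmounted. This is expected when a Deployment replaces a ReplicaSet before the previous pod ever starts, as happens here with the successive dnsmasq-dns pods. The shape of that failure, sketched with illustrative names (this shows the context-cancellation pattern, not kubelet's actual code):

```go
// wait_mounts.go — a sync step waits for volume mounts, and deleting the pod
// cancels the context so the worker gives up instead of blocking forever.
package main

import (
	"context"
	"fmt"
	"time"
)

func waitForMounts(ctx context.Context, volumes []string, mounted <-chan string) error {
	pending := map[string]bool{}
	for _, v := range volumes {
		pending[v] = true
	}
	for len(pending) > 0 {
		select {
		case v := <-mounted:
			delete(pending, v) // a MountVolume.SetUp succeeded
		case <-ctx.Done():
			left := make([]string, 0, len(pending))
			for v := range pending {
				left = append(left, v)
			}
			return fmt.Errorf("unmounted volumes=%v: %w", left, ctx.Err())
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	mounted := make(chan string)
	go func() { time.Sleep(10 * time.Millisecond); cancel() }() // pod deleted mid-sync
	err := waitForMounts(ctx, []string{"config", "dns-svc"}, mounted)
	fmt.Println("Error syncing pod, skipping:", err) // ...: context canceled
}
```

The error is logged and the pod is simply dropped; nothing is retried because the pod no longer exists in the API.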
\"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261805 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261835 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwx2z\" (UniqueName: \"kubernetes.io/projected/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-kube-api-access-fwx2z\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261875 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-credential-keys\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.261909 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-combined-ca-bundle\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.262927 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.265634 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-config\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.265677 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.266569 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.268890 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " 
pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.275916 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7nhr6"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.287511 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.302942 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jdrff"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.303913 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.308596 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwx2z\" (UniqueName: \"kubernetes.io/projected/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-kube-api-access-fwx2z\") pod \"dnsmasq-dns-5959f8865f-kzv6z\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.326009 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.326153 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-h6vsl" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.326342 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7nhr6"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370457 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370508 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-config\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370527 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-config-data\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370575 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-credential-keys\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370606 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-combined-ca-bundle\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 
02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370625 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7fq4\" (UniqueName: \"kubernetes.io/projected/72b66af4-aa8a-4739-8ed1-d55f066b5505-kube-api-access-z7fq4\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370659 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-combined-ca-bundle\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370685 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370699 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-config-data\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370719 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7m9s\" (UniqueName: \"kubernetes.io/projected/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-kube-api-access-w7m9s\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-scripts\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370770 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-svc\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370788 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370822 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qklm4\" (UniqueName: \"kubernetes.io/projected/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-kube-api-access-qklm4\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " 
pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.370841 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-fernet-keys\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.375310 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jdrff"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.392385 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-credential-keys\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.405307 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-fernet-keys\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.406127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-combined-ca-bundle\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.406388 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-scripts\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.407078 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-config-data\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.447521 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wjvr2"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472422 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472519 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7fq4\" (UniqueName: \"kubernetes.io/projected/72b66af4-aa8a-4739-8ed1-d55f066b5505-kube-api-access-z7fq4\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472612 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-combined-ca-bundle\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472649 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472676 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7m9s\" (UniqueName: \"kubernetes.io/projected/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-kube-api-access-w7m9s\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472716 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-svc\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472739 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472819 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472844 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-config\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.472863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-config-data\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 
13:18:30.476303 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.483124 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qklm4\" (UniqueName: \"kubernetes.io/projected/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-kube-api-access-qklm4\") pod \"keystone-bootstrap-vb8sd\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") " pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.487215 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-55npv" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.488833 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.489118 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.489235 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.496544 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-combined-ca-bundle\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.502819 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-config\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.509254 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-svc\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.511580 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-config-data\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.522367 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.533819 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-sync-wjvr2"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.553454 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7fq4\" (UniqueName: \"kubernetes.io/projected/72b66af4-aa8a-4739-8ed1-d55f066b5505-kube-api-access-z7fq4\") pod \"heat-db-sync-jdrff\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.555312 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7m9s\" (UniqueName: \"kubernetes.io/projected/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-kube-api-access-w7m9s\") pod \"dnsmasq-dns-847c4cc679-7nhr6\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.575328 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-config-data\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.575392 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-scripts\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.575415 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-db-sync-config-data\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.575430 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nmh\" (UniqueName: \"kubernetes.io/projected/462f37c8-5909-418b-bf1f-58af764957ab-kube-api-access-g2nmh\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.575482 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462f37c8-5909-418b-bf1f-58af764957ab-etc-machine-id\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.575505 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-combined-ca-bundle\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.582986 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.589383 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.596037 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.596234 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.596724 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdrff" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.667196 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.681787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462f37c8-5909-418b-bf1f-58af764957ab-etc-machine-id\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682031 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-combined-ca-bundle\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682111 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-scripts\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682185 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-log-httpd\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682281 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682392 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-config-data\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682469 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-config-data\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682549 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmj9\" (UniqueName: 
\"kubernetes.io/projected/a510324b-16f3-4585-abc9-ae66997c2987-kube-api-access-zvmj9\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682708 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-scripts\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682806 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-db-sync-config-data\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682887 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2nmh\" (UniqueName: \"kubernetes.io/projected/462f37c8-5909-418b-bf1f-58af764957ab-kube-api-access-g2nmh\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.682998 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.683071 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-run-httpd\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.683551 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462f37c8-5909-418b-bf1f-58af764957ab-etc-machine-id\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.693246 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-combined-ca-bundle\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.710654 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-config-data\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.710806 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-db-sync-config-data\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc 
kubenswrapper[4955]: I0202 13:18:30.736197 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-scripts\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.737718 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb8sd" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.747194 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2nmh\" (UniqueName: \"kubernetes.io/projected/462f37c8-5909-418b-bf1f-58af764957ab-kube-api-access-g2nmh\") pod \"cinder-db-sync-wjvr2\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.758580 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9wmph"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.760814 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.762843 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.789578 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.789787 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nwdv4" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.789928 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.791297 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.791331 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-config-data\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.791369 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmj9\" (UniqueName: \"kubernetes.io/projected/a510324b-16f3-4585-abc9-ae66997c2987-kube-api-access-zvmj9\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.791433 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.791448 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-run-httpd\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.799473 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-scripts\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.799615 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-log-httpd\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.800189 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-log-httpd\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.805990 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-run-httpd\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.810245 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.817744 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-config-data\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.817774 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9wmph"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.820384 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-scripts\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.846981 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.881200 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.888088 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.907825 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-combined-ca-bundle\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.907889 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf9tc\" (UniqueName: \"kubernetes.io/projected/a474135c-7a61-46ee-af96-680f7139539b-kube-api-access-qf9tc\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.907913 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-config\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.913886 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.938060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvmj9\" (UniqueName: \"kubernetes.io/projected/a510324b-16f3-4585-abc9-ae66997c2987-kube-api-access-zvmj9\") pod \"ceilometer-0\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " pod="openstack/ceilometer-0" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.967446 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rvmkt"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.968686 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.980073 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.980582 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7t2ql" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.981202 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.983914 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rvmkt"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.992105 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4q9j2"] Feb 02 13:18:30 crc kubenswrapper[4955]: I0202 13:18:30.993162 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.004473 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4q9j2"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.006240 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.006318 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-td7s9" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.009506 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-config\") pod \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.009614 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-nb\") pod \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.009669 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwx2z\" (UniqueName: \"kubernetes.io/projected/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-kube-api-access-fwx2z\") pod \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.009734 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-sb\") pod \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.009796 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-svc\") pod \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.009842 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-swift-storage-0\") pod \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\" (UID: \"1d1ab56a-84b3-4a03-87ea-93b40760c8bf\") " Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.010072 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-combined-ca-bundle\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.010117 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-config\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.010134 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qf9tc\" (UniqueName: \"kubernetes.io/projected/a474135c-7a61-46ee-af96-680f7139539b-kube-api-access-qf9tc\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.011108 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d1ab56a-84b3-4a03-87ea-93b40760c8bf" (UID: "1d1ab56a-84b3-4a03-87ea-93b40760c8bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.011427 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d1ab56a-84b3-4a03-87ea-93b40760c8bf" (UID: "1d1ab56a-84b3-4a03-87ea-93b40760c8bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.011732 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d1ab56a-84b3-4a03-87ea-93b40760c8bf" (UID: "1d1ab56a-84b3-4a03-87ea-93b40760c8bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.012234 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d1ab56a-84b3-4a03-87ea-93b40760c8bf" (UID: "1d1ab56a-84b3-4a03-87ea-93b40760c8bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.012374 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7nhr6"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.013946 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-config" (OuterVolumeSpecName: "config") pod "1d1ab56a-84b3-4a03-87ea-93b40760c8bf" (UID: "1d1ab56a-84b3-4a03-87ea-93b40760c8bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.022972 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-config\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.041904 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-kube-api-access-fwx2z" (OuterVolumeSpecName: "kube-api-access-fwx2z") pod "1d1ab56a-84b3-4a03-87ea-93b40760c8bf" (UID: "1d1ab56a-84b3-4a03-87ea-93b40760c8bf"). InnerVolumeSpecName "kube-api-access-fwx2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.042980 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-combined-ca-bundle\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.060882 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-hdvcn"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.062236 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf9tc\" (UniqueName: \"kubernetes.io/projected/a474135c-7a61-46ee-af96-680f7139539b-kube-api-access-qf9tc\") pod \"neutron-db-sync-9wmph\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.064945 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.085112 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-hdvcn"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111162 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhc5\" (UniqueName: \"kubernetes.io/projected/01fe12bd-03cd-402a-89a5-db886a443423-kube-api-access-rhhc5\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111312 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-combined-ca-bundle\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111387 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe12bd-03cd-402a-89a5-db886a443423-logs\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111482 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-config-data\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111635 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-db-sync-config-data\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111729 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-scripts\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111813 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p767d\" (UniqueName: \"kubernetes.io/projected/fc8aafab-3905-4e44-ba3e-134253a38a60-kube-api-access-p767d\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.111897 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-combined-ca-bundle\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.112000 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.112066 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwx2z\" (UniqueName: \"kubernetes.io/projected/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-kube-api-access-fwx2z\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.112169 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.112243 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.112315 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.114033 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1ab56a-84b3-4a03-87ea-93b40760c8bf-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.181104 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9wmph" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.215790 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.215864 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-scripts\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.215932 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p767d\" (UniqueName: \"kubernetes.io/projected/fc8aafab-3905-4e44-ba3e-134253a38a60-kube-api-access-p767d\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.215979 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216001 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-combined-ca-bundle\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216025 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhc5\" (UniqueName: \"kubernetes.io/projected/01fe12bd-03cd-402a-89a5-db886a443423-kube-api-access-rhhc5\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216065 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-combined-ca-bundle\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216083 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe12bd-03cd-402a-89a5-db886a443423-logs\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" 
Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216115 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-config\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216142 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-config-data\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216168 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216214 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2sbm\" (UniqueName: \"kubernetes.io/projected/009d1cab-2cab-475b-9722-881450cee4a5-kube-api-access-c2sbm\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.216237 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-db-sync-config-data\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.220635 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe12bd-03cd-402a-89a5-db886a443423-logs\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.227809 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-combined-ca-bundle\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.228679 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-config-data\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.229155 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-combined-ca-bundle\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.231928 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-scripts\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.232092 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.233153 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-db-sync-config-data\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.250584 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhc5\" (UniqueName: \"kubernetes.io/projected/01fe12bd-03cd-402a-89a5-db886a443423-kube-api-access-rhhc5\") pod \"placement-db-sync-rvmkt\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.251546 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p767d\" (UniqueName: \"kubernetes.io/projected/fc8aafab-3905-4e44-ba3e-134253a38a60-kube-api-access-p767d\") pod \"barbican-db-sync-4q9j2\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.279804 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.281145 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.289214 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.289243 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2fcpn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.289469 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.301865 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.317905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-config\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.317947 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.317997 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2sbm\" (UniqueName: \"kubernetes.io/projected/009d1cab-2cab-475b-9722-881450cee4a5-kube-api-access-c2sbm\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.318036 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.318101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.318132 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.319353 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.319938 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-config\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.320005 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.320062 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.323472 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.348257 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rvmkt" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.365265 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2sbm\" (UniqueName: \"kubernetes.io/projected/009d1cab-2cab-475b-9722-881450cee4a5-kube-api-access-c2sbm\") pod \"dnsmasq-dns-785d8bcb8c-hdvcn\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.373584 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.380696 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jdrff"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.402339 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424096 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424300 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxkrd\" (UniqueName: \"kubernetes.io/projected/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-kube-api-access-hxkrd\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424338 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424390 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424422 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-logs\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424481 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-scripts\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.424572 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-config-data\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.463745 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vb8sd"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527482 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527540 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-logs\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527598 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-scripts\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527651 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-config-data\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527799 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxkrd\" (UniqueName: \"kubernetes.io/projected/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-kube-api-access-hxkrd\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.527821 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.528303 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.529777 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.529931 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-logs\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.534882 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.538151 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-config-data\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.545457 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-scripts\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.555520 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxkrd\" (UniqueName: \"kubernetes.io/projected/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-kube-api-access-hxkrd\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.572531 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") " pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.612834 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.676104 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wjvr2"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.687512 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7nhr6"] Feb 02 13:18:31 crc kubenswrapper[4955]: W0202 13:18:31.740819 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462f37c8_5909_418b_bf1f_58af764957ab.slice/crio-ceea223a8e85bb0f2b19a3514815c0a4410f0e003dd8e57ac234d2fac044d49f WatchSource:0}: Error finding container ceea223a8e85bb0f2b19a3514815c0a4410f0e003dd8e57ac234d2fac044d49f: Status 404 returned error can't find the container with id ceea223a8e85bb0f2b19a3514815c0a4410f0e003dd8e57ac234d2fac044d49f Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.813408 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjvr2" event={"ID":"462f37c8-5909-418b-bf1f-58af764957ab","Type":"ContainerStarted","Data":"ceea223a8e85bb0f2b19a3514815c0a4410f0e003dd8e57ac234d2fac044d49f"} Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.818128 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdrff" event={"ID":"72b66af4-aa8a-4739-8ed1-d55f066b5505","Type":"ContainerStarted","Data":"fe9cb1cbef57253b557390b477f744085bfe6293e98130b1dd9c62ab229163a8"} Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.823109 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" event={"ID":"7efe6000-a81b-4c97-acf2-a1a02bc80ecc","Type":"ContainerStarted","Data":"355ef4d85c360e6fbaea733a86f5251ed27cfae7aa34f0d066093d7ec0006e3f"} Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.827891 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerName="dnsmasq-dns" containerID="cri-o://b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d" gracePeriod=10 Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.828139 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb8sd" event={"ID":"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4","Type":"ContainerStarted","Data":"cdd52a6988c154cc1d5fe8008d15767cdad6e44a0b4d34c2dd4a80ec87631ff2"} Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.828190 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kzv6z" Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.880327 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9wmph"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.917906 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kzv6z"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.948008 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.975620 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kzv6z"] Feb 02 13:18:31 crc kubenswrapper[4955]: I0202 13:18:31.984370 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-hdvcn"] Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.030976 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.034291 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.040415 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.069705 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:18:32 crc kubenswrapper[4955]: W0202 13:18:32.094756 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc8aafab_3905_4e44_ba3e_134253a38a60.slice/crio-e3bcbd6e4db9f5b293f72e0b7467c349975ad1cd4bcd438ef20e4e49c2ce5564 WatchSource:0}: Error finding container e3bcbd6e4db9f5b293f72e0b7467c349975ad1cd4bcd438ef20e4e49c2ce5564: Status 404 returned error can't find the container with id e3bcbd6e4db9f5b293f72e0b7467c349975ad1cd4bcd438ef20e4e49c2ce5564 Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.102670 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4q9j2"] Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140511 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140607 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140633 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kwn\" (UniqueName: \"kubernetes.io/projected/7e6debfb-0b2d-4e89-b847-25adb9db8887-kube-api-access-72kwn\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140675 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140902 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140916 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.140957 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.144057 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rvmkt"] Feb 02 13:18:32 crc kubenswrapper[4955]: W0202 13:18:32.179704 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01fe12bd_03cd_402a_89a5_db886a443423.slice/crio-62e2a98849df0ddf2f32c705756e36133235e0ad218966a8b0464aadd50bcd58 WatchSource:0}: Error finding container 62e2a98849df0ddf2f32c705756e36133235e0ad218966a8b0464aadd50bcd58: Status 404 returned error can't find the container with id 62e2a98849df0ddf2f32c705756e36133235e0ad218966a8b0464aadd50bcd58 Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.242705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.243267 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kwn\" (UniqueName: \"kubernetes.io/projected/7e6debfb-0b2d-4e89-b847-25adb9db8887-kube-api-access-72kwn\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.243461 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.243989 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.244172 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.244433 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.244705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.245100 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.245683 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.254486 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.256711 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.269538 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.300707 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.308597 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kwn\" (UniqueName: \"kubernetes.io/projected/7e6debfb-0b2d-4e89-b847-25adb9db8887-kube-api-access-72kwn\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.313826 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:18:32 crc kubenswrapper[4955]: W0202 13:18:32.348508 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57d8bb1a_17d9_4802_88f8_70d4a5f4343d.slice/crio-5e57d850dbb85578abe407e3d1b21c11a7feae04b931efa240af37d4ab9a5509 WatchSource:0}: Error finding container 5e57d850dbb85578abe407e3d1b21c11a7feae04b931efa240af37d4ab9a5509: Status 404 returned error can't find the container with id 5e57d850dbb85578abe407e3d1b21c11a7feae04b931efa240af37d4ab9a5509 Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.389038 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.483007 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.655512 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.694825 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.694984 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.755611 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.765318 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-config\") pod \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.765392 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb9cb\" (UniqueName: \"kubernetes.io/projected/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-kube-api-access-cb9cb\") pod \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.765512 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-sb\") pod \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.765536 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-swift-storage-0\") pod \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.765581 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-nb\") pod \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.765613 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-svc\") pod \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\" (UID: \"2cd9a3cb-4c57-435d-86e7-916155f5f1f4\") " Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.790378 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-kube-api-access-cb9cb" (OuterVolumeSpecName: "kube-api-access-cb9cb") pod "2cd9a3cb-4c57-435d-86e7-916155f5f1f4" (UID: "2cd9a3cb-4c57-435d-86e7-916155f5f1f4"). InnerVolumeSpecName "kube-api-access-cb9cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.847489 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cd9a3cb-4c57-435d-86e7-916155f5f1f4" (UID: "2cd9a3cb-4c57-435d-86e7-916155f5f1f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.871687 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.871710 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb9cb\" (UniqueName: \"kubernetes.io/projected/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-kube-api-access-cb9cb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.885385 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2cd9a3cb-4c57-435d-86e7-916155f5f1f4" (UID: "2cd9a3cb-4c57-435d-86e7-916155f5f1f4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.885669 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cd9a3cb-4c57-435d-86e7-916155f5f1f4" (UID: "2cd9a3cb-4c57-435d-86e7-916155f5f1f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.897154 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cd9a3cb-4c57-435d-86e7-916155f5f1f4" (UID: "2cd9a3cb-4c57-435d-86e7-916155f5f1f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.900736 4955 generic.go:334] "Generic (PLEG): container finished" podID="009d1cab-2cab-475b-9722-881450cee4a5" containerID="3293e7f19a636ec85bc0c42981450393440d8f8f0bd95f8d279f16ced7186b65" exitCode=0 Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.900795 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" event={"ID":"009d1cab-2cab-475b-9722-881450cee4a5","Type":"ContainerDied","Data":"3293e7f19a636ec85bc0c42981450393440d8f8f0bd95f8d279f16ced7186b65"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.900823 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" event={"ID":"009d1cab-2cab-475b-9722-881450cee4a5","Type":"ContainerStarted","Data":"d652a593f5bd055d0b3c1168231edcccfd8614163b6d045c593e735248c7c236"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.932366 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wmph" event={"ID":"a474135c-7a61-46ee-af96-680f7139539b","Type":"ContainerStarted","Data":"00debb421c851ec26b4de53a8c239f239fdcf9c8c21aa21c822475f375adefc7"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.932408 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wmph" event={"ID":"a474135c-7a61-46ee-af96-680f7139539b","Type":"ContainerStarted","Data":"1c1fe0622ed0125cc70925d129b1661821c9ab2e8cc40ee9d776ff5d9d6bbebd"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.936538 4955 generic.go:334] "Generic (PLEG): container finished" podID="7efe6000-a81b-4c97-acf2-a1a02bc80ecc" containerID="220ed7d4a03336c35db32802f5e82d9e8018909fb87d4e5faead0641452b2463" exitCode=0 Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.936624 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" event={"ID":"7efe6000-a81b-4c97-acf2-a1a02bc80ecc","Type":"ContainerDied","Data":"220ed7d4a03336c35db32802f5e82d9e8018909fb87d4e5faead0641452b2463"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.945297 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb8sd" event={"ID":"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4","Type":"ContainerStarted","Data":"c3d07a5f42ffb4a43cc7da94c1c0187ca4894c28d2e58b358a40156e31e24768"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.953604 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerStarted","Data":"47a58a8e70524af6ed532a402b560bbfc3837bd4a0cc952e54d8f2f45c3f8723"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.962088 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57d8bb1a-17d9-4802-88f8-70d4a5f4343d","Type":"ContainerStarted","Data":"5e57d850dbb85578abe407e3d1b21c11a7feae04b931efa240af37d4ab9a5509"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.969371 4955 generic.go:334] "Generic (PLEG): container finished" podID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerID="b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d" exitCode=0 Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.969449 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" 
event={"ID":"2cd9a3cb-4c57-435d-86e7-916155f5f1f4","Type":"ContainerDied","Data":"b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.969479 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" event={"ID":"2cd9a3cb-4c57-435d-86e7-916155f5f1f4","Type":"ContainerDied","Data":"40d6c72d9435619f548d73d18febceb9ce74d7e6c0651e37358717f5795fa7e0"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.969496 4955 scope.go:117] "RemoveContainer" containerID="b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.969703 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fmhxp" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.972965 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rvmkt" event={"ID":"01fe12bd-03cd-402a-89a5-db886a443423","Type":"ContainerStarted","Data":"62e2a98849df0ddf2f32c705756e36133235e0ad218966a8b0464aadd50bcd58"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.973073 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.973094 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.973104 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.974147 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-config" (OuterVolumeSpecName: "config") pod "2cd9a3cb-4c57-435d-86e7-916155f5f1f4" (UID: "2cd9a3cb-4c57-435d-86e7-916155f5f1f4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.982860 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9wmph" podStartSLOduration=2.982841031 podStartE2EDuration="2.982841031s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:32.94846633 +0000 UTC m=+963.860802770" watchObservedRunningTime="2026-02-02 13:18:32.982841031 +0000 UTC m=+963.895177481" Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.984593 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4q9j2" event={"ID":"fc8aafab-3905-4e44-ba3e-134253a38a60","Type":"ContainerStarted","Data":"e3bcbd6e4db9f5b293f72e0b7467c349975ad1cd4bcd438ef20e4e49c2ce5564"} Feb 02 13:18:32 crc kubenswrapper[4955]: I0202 13:18:32.992132 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vb8sd" podStartSLOduration=2.992115654 podStartE2EDuration="2.992115654s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:32.969882018 +0000 UTC m=+963.882218458" watchObservedRunningTime="2026-02-02 13:18:32.992115654 +0000 UTC m=+963.904452104" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.003350 4955 scope.go:117] "RemoveContainer" containerID="175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.017265 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.017310 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.017350 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.018020 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cd1b5f598a7c72d423d2d4f07c02704decf6f32b1b11d2eaf56ffcde03b7e1b"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.018067 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://2cd1b5f598a7c72d423d2d4f07c02704decf6f32b1b11d2eaf56ffcde03b7e1b" gracePeriod=600 Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.048427 4955 scope.go:117] "RemoveContainer" 
containerID="b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d" Feb 02 13:18:33 crc kubenswrapper[4955]: E0202 13:18:33.048837 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d\": container with ID starting with b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d not found: ID does not exist" containerID="b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.048867 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d"} err="failed to get container status \"b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d\": rpc error: code = NotFound desc = could not find container \"b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d\": container with ID starting with b81edc2768fb86f5a82021a97ebe11a96b20cd1a5d5c74dd154c134cae01841d not found: ID does not exist" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.048886 4955 scope.go:117] "RemoveContainer" containerID="175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0" Feb 02 13:18:33 crc kubenswrapper[4955]: E0202 13:18:33.049430 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0\": container with ID starting with 175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0 not found: ID does not exist" containerID="175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.049454 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0"} err="failed to get container status \"175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0\": rpc error: code = NotFound desc = could not find container \"175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0\": container with ID starting with 175dd5b4cec18355208126369bc922179df03664a5739f5c19aab82b24b28bf0 not found: ID does not exist" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.076033 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd9a3cb-4c57-435d-86e7-916155f5f1f4-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.335264 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fmhxp"] Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.346659 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fmhxp"] Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.367696 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.538597 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.587260 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7m9s\" (UniqueName: \"kubernetes.io/projected/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-kube-api-access-w7m9s\") pod \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.587346 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-sb\") pod \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.587371 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-nb\") pod \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.587395 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-svc\") pod \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.587434 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-config\") pod \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.587589 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-swift-storage-0\") pod \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\" (UID: \"7efe6000-a81b-4c97-acf2-a1a02bc80ecc\") " Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.603757 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-kube-api-access-w7m9s" (OuterVolumeSpecName: "kube-api-access-w7m9s") pod "7efe6000-a81b-4c97-acf2-a1a02bc80ecc" (UID: "7efe6000-a81b-4c97-acf2-a1a02bc80ecc"). InnerVolumeSpecName "kube-api-access-w7m9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.616174 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7efe6000-a81b-4c97-acf2-a1a02bc80ecc" (UID: "7efe6000-a81b-4c97-acf2-a1a02bc80ecc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.622832 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-config" (OuterVolumeSpecName: "config") pod "7efe6000-a81b-4c97-acf2-a1a02bc80ecc" (UID: "7efe6000-a81b-4c97-acf2-a1a02bc80ecc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.655776 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7efe6000-a81b-4c97-acf2-a1a02bc80ecc" (UID: "7efe6000-a81b-4c97-acf2-a1a02bc80ecc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.660083 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7efe6000-a81b-4c97-acf2-a1a02bc80ecc" (UID: "7efe6000-a81b-4c97-acf2-a1a02bc80ecc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.680681 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7efe6000-a81b-4c97-acf2-a1a02bc80ecc" (UID: "7efe6000-a81b-4c97-acf2-a1a02bc80ecc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.690803 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7m9s\" (UniqueName: \"kubernetes.io/projected/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-kube-api-access-w7m9s\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.690843 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.690852 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.690862 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.690873 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.690881 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7efe6000-a81b-4c97-acf2-a1a02bc80ecc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.738134 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d1ab56a-84b3-4a03-87ea-93b40760c8bf" path="/var/lib/kubelet/pods/1d1ab56a-84b3-4a03-87ea-93b40760c8bf/volumes" Feb 02 13:18:33 crc kubenswrapper[4955]: I0202 13:18:33.738627 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" path="/var/lib/kubelet/pods/2cd9a3cb-4c57-435d-86e7-916155f5f1f4/volumes" Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.005408 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6" event={"ID":"7efe6000-a81b-4c97-acf2-a1a02bc80ecc","Type":"ContainerDied","Data":"355ef4d85c360e6fbaea733a86f5251ed27cfae7aa34f0d066093d7ec0006e3f"}
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.005708 4955 scope.go:117] "RemoveContainer" containerID="220ed7d4a03336c35db32802f5e82d9e8018909fb87d4e5faead0641452b2463"
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.005807 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7nhr6"
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.022019 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="2cd1b5f598a7c72d423d2d4f07c02704decf6f32b1b11d2eaf56ffcde03b7e1b" exitCode=0
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.022065 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"2cd1b5f598a7c72d423d2d4f07c02704decf6f32b1b11d2eaf56ffcde03b7e1b"}
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.022127 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"602bc594f34404ff8d3d47bc3c3720ccccc87bdb99931ff5e26638726c7febe5"}
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.026897 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e6debfb-0b2d-4e89-b847-25adb9db8887","Type":"ContainerStarted","Data":"fce7773e89b1356136375f74cf70bbd500b377e7a183276b2dd7e82dd48245e1"}
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.067676 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7nhr6"]
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.083271 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57d8bb1a-17d9-4802-88f8-70d4a5f4343d","Type":"ContainerStarted","Data":"5ad8d2991301def92f17ba79fd24faed408a3da2135d73c2bb5e107a56a5cccd"}
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.087342 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7nhr6"]
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.120631 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" event={"ID":"009d1cab-2cab-475b-9722-881450cee4a5","Type":"ContainerStarted","Data":"6d4ead3d5948168fb325a5f4ccbb6ce60ebad36bd6f163e2583b07ab6484754f"}
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.120844 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn"
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.160043 4955 scope.go:117] "RemoveContainer" containerID="c5c24a4cc614a40516e0f58b2e903db33d318f5c1611a533b9d33712af008fe5"
Feb 02 13:18:34 crc kubenswrapper[4955]: I0202 13:18:34.167469 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" podStartSLOduration=4.16745114 podStartE2EDuration="4.16745114s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:34.160296108 +0000 UTC m=+965.072632558" watchObservedRunningTime="2026-02-02 13:18:34.16745114 +0000 UTC m=+965.079787590"
Feb 02 13:18:35 crc kubenswrapper[4955]: I0202 13:18:35.173844 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e6debfb-0b2d-4e89-b847-25adb9db8887","Type":"ContainerStarted","Data":"e3178ae91172910d32d20ef156aa81c5b17dee7dac059e53a21997c0e9e55c8c"}
Feb 02 13:18:35 crc kubenswrapper[4955]: I0202 13:18:35.731541 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efe6000-a81b-4c97-acf2-a1a02bc80ecc" path="/var/lib/kubelet/pods/7efe6000-a81b-4c97-acf2-a1a02bc80ecc/volumes"
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.192073 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e6debfb-0b2d-4e89-b847-25adb9db8887","Type":"ContainerStarted","Data":"25e44599145685aeb7c33b814c8d0f7666c227224720f062b81cdaff04167a76"}
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.192479 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-log" containerID="cri-o://e3178ae91172910d32d20ef156aa81c5b17dee7dac059e53a21997c0e9e55c8c" gracePeriod=30
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.193487 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-httpd" containerID="cri-o://25e44599145685aeb7c33b814c8d0f7666c227224720f062b81cdaff04167a76" gracePeriod=30
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.196173 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57d8bb1a-17d9-4802-88f8-70d4a5f4343d","Type":"ContainerStarted","Data":"2214eaecfb2c25e68c10875a225868c78d9a5b77b74bcd33e0621b9e8d05b6ac"}
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.196336 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-log" containerID="cri-o://5ad8d2991301def92f17ba79fd24faed408a3da2135d73c2bb5e107a56a5cccd" gracePeriod=30
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.196465 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-httpd" containerID="cri-o://2214eaecfb2c25e68c10875a225868c78d9a5b77b74bcd33e0621b9e8d05b6ac" gracePeriod=30
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.235246 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.235229399 podStartE2EDuration="6.235229399s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:36.229858059 +0000 UTC m=+967.142194499" watchObservedRunningTime="2026-02-02 13:18:36.235229399 +0000 UTC m=+967.147565849"
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.282269 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.282249764 podStartE2EDuration="6.282249764s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:36.281009304 +0000 UTC m=+967.193345754" watchObservedRunningTime="2026-02-02 13:18:36.282249764 +0000 UTC m=+967.194586214"
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.517190 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpct9"
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.581229 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpct9"
Feb 02 13:18:36 crc kubenswrapper[4955]: I0202 13:18:36.751894 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpct9"]
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.213581 4955 generic.go:334] "Generic (PLEG): container finished" podID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerID="25e44599145685aeb7c33b814c8d0f7666c227224720f062b81cdaff04167a76" exitCode=0
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.214196 4955 generic.go:334] "Generic (PLEG): container finished" podID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerID="e3178ae91172910d32d20ef156aa81c5b17dee7dac059e53a21997c0e9e55c8c" exitCode=143
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.213594 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e6debfb-0b2d-4e89-b847-25adb9db8887","Type":"ContainerDied","Data":"25e44599145685aeb7c33b814c8d0f7666c227224720f062b81cdaff04167a76"}
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.214328 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e6debfb-0b2d-4e89-b847-25adb9db8887","Type":"ContainerDied","Data":"e3178ae91172910d32d20ef156aa81c5b17dee7dac059e53a21997c0e9e55c8c"}
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.217037 4955 generic.go:334] "Generic (PLEG): container finished" podID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerID="2214eaecfb2c25e68c10875a225868c78d9a5b77b74bcd33e0621b9e8d05b6ac" exitCode=0
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.217073 4955 generic.go:334] "Generic (PLEG): container finished" podID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerID="5ad8d2991301def92f17ba79fd24faed408a3da2135d73c2bb5e107a56a5cccd" exitCode=143
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.217101 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57d8bb1a-17d9-4802-88f8-70d4a5f4343d","Type":"ContainerDied","Data":"2214eaecfb2c25e68c10875a225868c78d9a5b77b74bcd33e0621b9e8d05b6ac"}
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.217133 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57d8bb1a-17d9-4802-88f8-70d4a5f4343d","Type":"ContainerDied","Data":"5ad8d2991301def92f17ba79fd24faed408a3da2135d73c2bb5e107a56a5cccd"}
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.219144 4955 generic.go:334] "Generic (PLEG): container finished" podID="48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" containerID="c3d07a5f42ffb4a43cc7da94c1c0187ca4894c28d2e58b358a40156e31e24768" exitCode=0
Feb 02 13:18:37 crc kubenswrapper[4955]: I0202 13:18:37.219202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb8sd" event={"ID":"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4","Type":"ContainerDied","Data":"c3d07a5f42ffb4a43cc7da94c1c0187ca4894c28d2e58b358a40156e31e24768"}
Feb 02 13:18:38 crc kubenswrapper[4955]: I0202 13:18:38.230832 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpct9" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="registry-server" containerID="cri-o://b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14" gracePeriod=2
Feb 02 13:18:39 crc kubenswrapper[4955]: I0202 13:18:39.241815 4955 generic.go:334] "Generic (PLEG): container finished" podID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerID="b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14" exitCode=0
Feb 02 13:18:39 crc kubenswrapper[4955]: I0202 13:18:39.242018 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerDied","Data":"b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14"}
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.227972 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.272138 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e6debfb-0b2d-4e89-b847-25adb9db8887","Type":"ContainerDied","Data":"fce7773e89b1356136375f74cf70bbd500b377e7a183276b2dd7e82dd48245e1"}
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.272187 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.272197 4955 scope.go:117] "RemoveContainer" containerID="25e44599145685aeb7c33b814c8d0f7666c227224720f062b81cdaff04167a76"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.332359 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-combined-ca-bundle\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333210 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-httpd-run\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333281 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-scripts\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333418 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72kwn\" (UniqueName: \"kubernetes.io/projected/7e6debfb-0b2d-4e89-b847-25adb9db8887-kube-api-access-72kwn\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333477 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333511 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-config-data\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333640 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333589 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-logs\") pod \"7e6debfb-0b2d-4e89-b847-25adb9db8887\" (UID: \"7e6debfb-0b2d-4e89-b847-25adb9db8887\") "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.333961 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-logs" (OuterVolumeSpecName: "logs") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.334316 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-logs\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.334334 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e6debfb-0b2d-4e89-b847-25adb9db8887-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.338200 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.339830 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-scripts" (OuterVolumeSpecName: "scripts") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.340146 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6debfb-0b2d-4e89-b847-25adb9db8887-kube-api-access-72kwn" (OuterVolumeSpecName: "kube-api-access-72kwn") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "kube-api-access-72kwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.434719 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.436877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-config-data" (OuterVolumeSpecName: "config-data") pod "7e6debfb-0b2d-4e89-b847-25adb9db8887" (UID: "7e6debfb-0b2d-4e89-b847-25adb9db8887"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.437237 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.437274 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.437286 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.437302 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6debfb-0b2d-4e89-b847-25adb9db8887-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.437313 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72kwn\" (UniqueName: \"kubernetes.io/projected/7e6debfb-0b2d-4e89-b847-25adb9db8887-kube-api-access-72kwn\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.474291 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.539892 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.631980 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.657032 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681002 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:18:40 crc kubenswrapper[4955]: E0202 13:18:40.681420 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerName="dnsmasq-dns"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681445 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerName="dnsmasq-dns"
Feb 02 13:18:40 crc kubenswrapper[4955]: E0202 13:18:40.681460 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-log"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681467 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-log"
Feb 02 13:18:40 crc kubenswrapper[4955]: E0202 13:18:40.681474 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerName="init"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681479 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerName="init"
Feb 02 13:18:40 crc kubenswrapper[4955]: E0202 13:18:40.681490 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efe6000-a81b-4c97-acf2-a1a02bc80ecc" containerName="init"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681496 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efe6000-a81b-4c97-acf2-a1a02bc80ecc" containerName="init"
Feb 02 13:18:40 crc kubenswrapper[4955]: E0202 13:18:40.681512 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-httpd"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681517 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-httpd"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681743 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efe6000-a81b-4c97-acf2-a1a02bc80ecc" containerName="init"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681768 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-httpd"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681779 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" containerName="glance-log"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.681793 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9a3cb-4c57-435d-86e7-916155f5f1f4" containerName="dnsmasq-dns"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.682769 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.689709 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.691297 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750271 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750442 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750477 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrllc\" (UniqueName: \"kubernetes.io/projected/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-kube-api-access-nrllc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750526 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750553 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-logs\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.750712 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.853922 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854018 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854041 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrllc\" (UniqueName: \"kubernetes.io/projected/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-kube-api-access-nrllc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854071 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854119 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854148 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-logs\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854180 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.854846 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.856969 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-logs\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.856998 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.859236 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.859587 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.866888 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.872793 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrllc\" (UniqueName: \"kubernetes.io/projected/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-kube-api-access-nrllc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:40 crc kubenswrapper[4955]: I0202 13:18:40.882597 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:41 crc kubenswrapper[4955]: I0202 13:18:41.008389 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:18:41 crc kubenswrapper[4955]: I0202 13:18:41.083546 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:18:41 crc kubenswrapper[4955]: I0202 13:18:41.404828 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn"
Feb 02 13:18:41 crc kubenswrapper[4955]: I0202 13:18:41.473893 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fw7r5"]
Feb 02 13:18:41 crc kubenswrapper[4955]: I0202 13:18:41.474921 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-fw7r5" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" containerID="cri-o://fc6ef9128cdac0aee35acb9360fc01f76aec0f9db5d779397b17097938364ea2" gracePeriod=10
Feb 02 13:18:41 crc kubenswrapper[4955]: I0202 13:18:41.731984 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6debfb-0b2d-4e89-b847-25adb9db8887" path="/var/lib/kubelet/pods/7e6debfb-0b2d-4e89-b847-25adb9db8887/volumes"
Feb 02 13:18:42 crc kubenswrapper[4955]: I0202 13:18:42.318922 4955 generic.go:334] "Generic (PLEG): container finished" podID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerID="fc6ef9128cdac0aee35acb9360fc01f76aec0f9db5d779397b17097938364ea2" exitCode=0
Feb 02 13:18:42 crc kubenswrapper[4955]: I0202 13:18:42.318980 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fw7r5" event={"ID":"2486e70c-d87b-4e5c-bb3b-19d55ebbf622","Type":"ContainerDied","Data":"fc6ef9128cdac0aee35acb9360fc01f76aec0f9db5d779397b17097938364ea2"}
Feb 02 13:18:43 crc kubenswrapper[4955]: I0202 13:18:43.658020 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-fw7r5" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.751678 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb8sd"
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.758681 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.842826 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-config-data\") pod \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844018 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxkrd\" (UniqueName: \"kubernetes.io/projected/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-kube-api-access-hxkrd\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844062 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-logs\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844101 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-config-data\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844525 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-logs" (OuterVolumeSpecName: "logs") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844627 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-combined-ca-bundle\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844677 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-scripts\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844693 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-combined-ca-bundle\") pod \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.844991 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.845087 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-scripts\") pod \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.845123 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-fernet-keys\") pod \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.845154 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qklm4\" (UniqueName: \"kubernetes.io/projected/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-kube-api-access-qklm4\") pod \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.845171 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-httpd-run\") pod \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\" (UID: \"57d8bb1a-17d9-4802-88f8-70d4a5f4343d\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.845205 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-credential-keys\") pod \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\" (UID: \"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4\") "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.846284 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-logs\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.846288 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.849388 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-scripts" (OuterVolumeSpecName: "scripts") pod "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" (UID: "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.850084 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-kube-api-access-hxkrd" (OuterVolumeSpecName: "kube-api-access-hxkrd") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "kube-api-access-hxkrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.850091 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.851318 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" (UID: "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.851631 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-kube-api-access-qklm4" (OuterVolumeSpecName: "kube-api-access-qklm4") pod "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" (UID: "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4"). InnerVolumeSpecName "kube-api-access-qklm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.851782 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" (UID: "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.862343 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-scripts" (OuterVolumeSpecName: "scripts") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.870778 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" (UID: "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.871385 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-config-data" (OuterVolumeSpecName: "config-data") pod "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" (UID: "48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.872123 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.892908 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-config-data" (OuterVolumeSpecName: "config-data") pod "57d8bb1a-17d9-4802-88f8-70d4a5f4343d" (UID: "57d8bb1a-17d9-4802-88f8-70d4a5f4343d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948390 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948456 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948521 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948534 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948545 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qklm4\" (UniqueName: \"kubernetes.io/projected/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-kube-api-access-qklm4\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948594 4955 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948608 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948618 4955 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948628 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948637 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxkrd\" (UniqueName: \"kubernetes.io/projected/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-kube-api-access-hxkrd\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948671 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.948686 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d8bb1a-17d9-4802-88f8-70d4a5f4343d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:45 crc kubenswrapper[4955]: I0202 13:18:45.965814 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.050684 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.362851 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb8sd" event={"ID":"48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4","Type":"ContainerDied","Data":"cdd52a6988c154cc1d5fe8008d15767cdad6e44a0b4d34c2dd4a80ec87631ff2"}
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.362923 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd52a6988c154cc1d5fe8008d15767cdad6e44a0b4d34c2dd4a80ec87631ff2"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.362973 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb8sd"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.365934 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57d8bb1a-17d9-4802-88f8-70d4a5f4343d","Type":"ContainerDied","Data":"5e57d850dbb85578abe407e3d1b21c11a7feae04b931efa240af37d4ab9a5509"}
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.365999 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.423355 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.437921 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.448508 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.448855 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" containerName="keystone-bootstrap"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.448868 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" containerName="keystone-bootstrap"
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.448879 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-log"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.448886 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-log"
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.448909 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-httpd"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.448917 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-httpd"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.449063 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-log"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.449086 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" containerName="keystone-bootstrap"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.449104 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" containerName="glance-httpd"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.450081 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.454910 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14 is running failed: container process not found" containerID="b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14" cmd=["grpc_health_probe","-addr=:50051"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.455107 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.455515 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14 is running failed: container process not found" containerID="b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14" cmd=["grpc_health_probe","-addr=:50051"]
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.458630 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14 is running failed: container process not found" containerID="b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14" cmd=["grpc_health_probe","-addr=:50051"]
Feb 02 13:18:46 crc kubenswrapper[4955]: E0202 13:18:46.458667 4955 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-hpct9" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="registry-server"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.459095 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.459242 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559611 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559676 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559762 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-logs\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559810 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559832 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559882 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559915 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.559975 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sh2\" (UniqueName: \"kubernetes.io/projected/39986812-66de-430e-a32f-95242971ddc6-kube-api-access-z9sh2\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-logs\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662241 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662261 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662284 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662323 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662356 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9sh2\" (UniqueName: \"kubernetes.io/projected/39986812-66de-430e-a32f-95242971ddc6-kube-api-access-z9sh2\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662385 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662467 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662620 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-logs\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.662829 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.667284 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.668060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.669609 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.672409 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.694061 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9sh2\" (UniqueName: \"kubernetes.io/projected/39986812-66de-430e-a32f-95242971ddc6-kube-api-access-z9sh2\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.704940 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.779333 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.847598 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vb8sd"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.861161 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vb8sd"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.935722 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wjnkc"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.937056 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.941843 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.942121 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.942272 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.942786 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.943902 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jnmgd"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.947376 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wjnkc"]
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.966359 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-credential-keys\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.966430 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-config-data\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.966476 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjrmx\" (UniqueName: \"kubernetes.io/projected/8727b892-9204-4236-9e54-80af117730db-kube-api-access-mjrmx\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.966618 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-combined-ca-bundle\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.966651 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-fernet-keys\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:46 crc kubenswrapper[4955]: I0202 13:18:46.966675 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-scripts\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc"
Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.067866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-mjrmx\" (UniqueName: \"kubernetes.io/projected/8727b892-9204-4236-9e54-80af117730db-kube-api-access-mjrmx\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.068187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-combined-ca-bundle\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.068221 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-fernet-keys\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.068276 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-scripts\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.068368 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-credential-keys\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.068437 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-config-data\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.071817 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-scripts\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.072027 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-config-data\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.072205 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-fernet-keys\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.089966 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-combined-ca-bundle\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") 
" pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.093089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-credential-keys\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.093319 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjrmx\" (UniqueName: \"kubernetes.io/projected/8727b892-9204-4236-9e54-80af117730db-kube-api-access-mjrmx\") pod \"keystone-bootstrap-wjnkc\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.255059 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.726200 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4" path="/var/lib/kubelet/pods/48ae2c5f-ba17-44d3-ab77-8a6bfff9bfb4/volumes" Feb 02 13:18:47 crc kubenswrapper[4955]: I0202 13:18:47.726779 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d8bb1a-17d9-4802-88f8-70d4a5f4343d" path="/var/lib/kubelet/pods/57d8bb1a-17d9-4802-88f8-70d4a5f4343d/volumes" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.066733 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.188724 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-utilities\") pod \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.188886 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snxnk\" (UniqueName: \"kubernetes.io/projected/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-kube-api-access-snxnk\") pod \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.188995 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-catalog-content\") pod \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\" (UID: \"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e\") " Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.189593 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-utilities" (OuterVolumeSpecName: "utilities") pod "5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" (UID: "5c399cd3-81fb-4625-8d18-ce2bb6c0b72e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.194816 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-kube-api-access-snxnk" (OuterVolumeSpecName: "kube-api-access-snxnk") pod "5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" (UID: "5c399cd3-81fb-4625-8d18-ce2bb6c0b72e"). 
InnerVolumeSpecName "kube-api-access-snxnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.288685 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" (UID: "5c399cd3-81fb-4625-8d18-ce2bb6c0b72e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.291180 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snxnk\" (UniqueName: \"kubernetes.io/projected/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-kube-api-access-snxnk\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.291213 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.291223 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.386373 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpct9" event={"ID":"5c399cd3-81fb-4625-8d18-ce2bb6c0b72e","Type":"ContainerDied","Data":"2fb83f6923219d9f7d9868184232e917db24dc15eb53afcce2c9b31abbdc135a"} Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.386785 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpct9" Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.428587 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpct9"] Feb 02 13:18:48 crc kubenswrapper[4955]: I0202 13:18:48.437675 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpct9"] Feb 02 13:18:49 crc kubenswrapper[4955]: I0202 13:18:49.730828 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" path="/var/lib/kubelet/pods/5c399cd3-81fb-4625-8d18-ce2bb6c0b72e/volumes" Feb 02 13:18:53 crc kubenswrapper[4955]: I0202 13:18:53.657959 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-fw7r5" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Feb 02 13:18:55 crc kubenswrapper[4955]: I0202 13:18:55.856818 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.014968 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-sb\") pod \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.015263 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-nb\") pod \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.015408 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-dns-svc\") pod \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.015476 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-config\") pod \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.015506 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4hfz\" (UniqueName: \"kubernetes.io/projected/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-kube-api-access-n4hfz\") pod \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\" (UID: \"2486e70c-d87b-4e5c-bb3b-19d55ebbf622\") " Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.021046 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-kube-api-access-n4hfz" (OuterVolumeSpecName: "kube-api-access-n4hfz") pod "2486e70c-d87b-4e5c-bb3b-19d55ebbf622" (UID: "2486e70c-d87b-4e5c-bb3b-19d55ebbf622"). InnerVolumeSpecName "kube-api-access-n4hfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.059173 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2486e70c-d87b-4e5c-bb3b-19d55ebbf622" (UID: "2486e70c-d87b-4e5c-bb3b-19d55ebbf622"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.059866 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2486e70c-d87b-4e5c-bb3b-19d55ebbf622" (UID: "2486e70c-d87b-4e5c-bb3b-19d55ebbf622"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.059950 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-config" (OuterVolumeSpecName: "config") pod "2486e70c-d87b-4e5c-bb3b-19d55ebbf622" (UID: "2486e70c-d87b-4e5c-bb3b-19d55ebbf622"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.060421 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2486e70c-d87b-4e5c-bb3b-19d55ebbf622" (UID: "2486e70c-d87b-4e5c-bb3b-19d55ebbf622"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.117403 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.117441 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.117452 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4hfz\" (UniqueName: \"kubernetes.io/projected/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-kube-api-access-n4hfz\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.117463 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.117471 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2486e70c-d87b-4e5c-bb3b-19d55ebbf622-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:56 crc kubenswrapper[4955]: E0202 13:18:56.285158 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 13:18:56 crc kubenswrapper[4955]: E0202 13:18:56.285569 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p767d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4q9j2_openstack(fc8aafab-3905-4e44-ba3e-134253a38a60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.286014 4955 scope.go:117] "RemoveContainer" containerID="e3178ae91172910d32d20ef156aa81c5b17dee7dac059e53a21997c0e9e55c8c" Feb 02 13:18:56 crc kubenswrapper[4955]: E0202 13:18:56.286703 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4q9j2" podUID="fc8aafab-3905-4e44-ba3e-134253a38a60" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.459806 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fw7r5" event={"ID":"2486e70c-d87b-4e5c-bb3b-19d55ebbf622","Type":"ContainerDied","Data":"d2d35c3a2a667ef36b8a7f2851868cac3907a783b0d884149598626c37a8730e"} Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.459897 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fw7r5" Feb 02 13:18:56 crc kubenswrapper[4955]: E0202 13:18:56.468795 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-4q9j2" podUID="fc8aafab-3905-4e44-ba3e-134253a38a60" Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.502315 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fw7r5"] Feb 02 13:18:56 crc kubenswrapper[4955]: I0202 13:18:56.508107 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fw7r5"] Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.291310 4955 scope.go:117] "RemoveContainer" containerID="2214eaecfb2c25e68c10875a225868c78d9a5b77b74bcd33e0621b9e8d05b6ac" Feb 02 13:18:57 crc kubenswrapper[4955]: E0202 13:18:57.319487 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 13:18:57 crc kubenswrapper[4955]: E0202 13:18:57.319675 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2nmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,S
tdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wjvr2_openstack(462f37c8-5909-418b-bf1f-58af764957ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:18:57 crc kubenswrapper[4955]: E0202 13:18:57.321972 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wjvr2" podUID="462f37c8-5909-418b-bf1f-58af764957ab" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.496475 4955 scope.go:117] "RemoveContainer" containerID="5ad8d2991301def92f17ba79fd24faed408a3da2135d73c2bb5e107a56a5cccd" Feb 02 13:18:57 crc kubenswrapper[4955]: E0202 13:18:57.496745 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wjvr2" podUID="462f37c8-5909-418b-bf1f-58af764957ab" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.540389 4955 scope.go:117] "RemoveContainer" containerID="b6e3136bd7ac69f17d571d695108858b9c71721ea0f036c8d18b819d73072a14" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.584008 4955 scope.go:117] "RemoveContainer" containerID="b01e8940e3675f606bc437b031aee89c1c8992b77db4002151ebe698c9a0ab40" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.634215 4955 scope.go:117] "RemoveContainer" containerID="9d38c30f4539e1e834a176291ccf2b36a806b8144cad8e4dcc251766caf418c2" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.662012 4955 scope.go:117] "RemoveContainer" containerID="fc6ef9128cdac0aee35acb9360fc01f76aec0f9db5d779397b17097938364ea2" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.685992 4955 scope.go:117] "RemoveContainer" containerID="156cc03f6970af3fc82db1fcf3453e7d77217661c2b831ef64225214ce8e14ea" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.739832 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" path="/var/lib/kubelet/pods/2486e70c-d87b-4e5c-bb3b-19d55ebbf622/volumes" Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.888408 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.902235 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wjnkc"] Feb 02 13:18:57 crc kubenswrapper[4955]: I0202 13:18:57.981038 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.502908 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rvmkt" event={"ID":"01fe12bd-03cd-402a-89a5-db886a443423","Type":"ContainerStarted","Data":"c2a6004a394285d858013a2b0d9bde8c131dc4196ac43ddfd67ad2080add130e"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.507470 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerStarted","Data":"0fa0e8a211764f1a621af9c25b87737c29a0e60393abf27277a89be3b8a56953"} Feb 02 
13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.510676 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wjnkc" event={"ID":"8727b892-9204-4236-9e54-80af117730db","Type":"ContainerStarted","Data":"2a7d51fff74b22d2db13c175da538155113d9a3c971ea37921b2ef2328ca62b4"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.510711 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wjnkc" event={"ID":"8727b892-9204-4236-9e54-80af117730db","Type":"ContainerStarted","Data":"225a333b1b3cca52be375e0d6ab2562b5b3d592576c0f4bb6fe764610a73a5e5"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.514791 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdrff" event={"ID":"72b66af4-aa8a-4739-8ed1-d55f066b5505","Type":"ContainerStarted","Data":"ace3d1ee29967a36470f8e3b9837d86b184440d57d8a834133cc9b1f0c2f717d"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.517173 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6","Type":"ContainerStarted","Data":"aec045ebda5d0a260372a2973270828c9f629111034696afe40df4b159908aec"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.517214 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6","Type":"ContainerStarted","Data":"051044815cc75cc5ccfe0f80a4ae53d6c3f11fc81e9c8ed05a72d953aa73a6ba"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.520277 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39986812-66de-430e-a32f-95242971ddc6","Type":"ContainerStarted","Data":"685f5307fe47fb5d48d4280ab312e6ecb49751dd0a106ab42b266e29d40038c1"} Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.533724 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rvmkt" podStartSLOduration=4.449574458 podStartE2EDuration="28.533703811s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="2026-02-02 13:18:32.189333937 +0000 UTC m=+963.101670387" lastFinishedPulling="2026-02-02 13:18:56.27346329 +0000 UTC m=+987.185799740" observedRunningTime="2026-02-02 13:18:58.528672269 +0000 UTC m=+989.441008719" watchObservedRunningTime="2026-02-02 13:18:58.533703811 +0000 UTC m=+989.446040281" Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.558925 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jdrff" podStartSLOduration=2.691142427 podStartE2EDuration="28.558901268s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="2026-02-02 13:18:31.410894057 +0000 UTC m=+962.323230507" lastFinishedPulling="2026-02-02 13:18:57.278652898 +0000 UTC m=+988.190989348" observedRunningTime="2026-02-02 13:18:58.551466708 +0000 UTC m=+989.463803178" watchObservedRunningTime="2026-02-02 13:18:58.558901268 +0000 UTC m=+989.471237718" Feb 02 13:18:58 crc kubenswrapper[4955]: I0202 13:18:58.659517 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-fw7r5" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.529863 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerStarted","Data":"cde878752e8a45393e592132225b1cd2ca435b8c4a23cbbbff193d2dceb3d78d"} Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.533267 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39986812-66de-430e-a32f-95242971ddc6","Type":"ContainerStarted","Data":"965cdc793093de7895abb18c5143614f7bc787de6e41c8bb1d77a7a85cd49c52"} Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.533417 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39986812-66de-430e-a32f-95242971ddc6","Type":"ContainerStarted","Data":"4dd195906a79b457df40eec250a447bc000ffedbad646e88caba8037efa32a1d"} Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.537138 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-log" containerID="cri-o://aec045ebda5d0a260372a2973270828c9f629111034696afe40df4b159908aec" gracePeriod=30 Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.537521 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6","Type":"ContainerStarted","Data":"66e4425429e716dfacfbfb29d34b9843dc165583696c5a8a3a0b90ca1fb8cea7"} Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.537820 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-httpd" containerID="cri-o://66e4425429e716dfacfbfb29d34b9843dc165583696c5a8a3a0b90ca1fb8cea7" gracePeriod=30 Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.569696 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.569680239 podStartE2EDuration="13.569680239s" podCreationTimestamp="2026-02-02 13:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:59.565403086 +0000 UTC m=+990.477739536" watchObservedRunningTime="2026-02-02 13:18:59.569680239 +0000 UTC m=+990.482016689" Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.570345 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wjnkc" podStartSLOduration=13.570335676 podStartE2EDuration="13.570335676s" podCreationTimestamp="2026-02-02 13:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:58.57431148 +0000 UTC m=+989.486647930" watchObservedRunningTime="2026-02-02 13:18:59.570335676 +0000 UTC m=+990.482672126" Feb 02 13:18:59 crc kubenswrapper[4955]: I0202 13:18:59.594779 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=19.594756504 podStartE2EDuration="19.594756504s" podCreationTimestamp="2026-02-02 13:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:59.589771404 +0000 UTC m=+990.502107854" watchObservedRunningTime="2026-02-02 13:18:59.594756504 +0000 UTC m=+990.507092954" Feb 02 13:19:00 crc 
kubenswrapper[4955]: I0202 13:19:00.549079 4955 generic.go:334] "Generic (PLEG): container finished" podID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerID="66e4425429e716dfacfbfb29d34b9843dc165583696c5a8a3a0b90ca1fb8cea7" exitCode=0 Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.549375 4955 generic.go:334] "Generic (PLEG): container finished" podID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerID="aec045ebda5d0a260372a2973270828c9f629111034696afe40df4b159908aec" exitCode=143 Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.549153 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6","Type":"ContainerDied","Data":"66e4425429e716dfacfbfb29d34b9843dc165583696c5a8a3a0b90ca1fb8cea7"} Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.549492 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6","Type":"ContainerDied","Data":"aec045ebda5d0a260372a2973270828c9f629111034696afe40df4b159908aec"} Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.879452 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922239 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-httpd-run\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922393 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrllc\" (UniqueName: \"kubernetes.io/projected/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-kube-api-access-nrllc\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922488 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-scripts\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922525 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-logs\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922638 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-config-data\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922683 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-combined-ca-bundle\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.922739 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\" (UID: \"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6\") " Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.925877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-logs" (OuterVolumeSpecName: "logs") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.929571 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.930170 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-kube-api-access-nrllc" (OuterVolumeSpecName: "kube-api-access-nrllc") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "kube-api-access-nrllc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.933605 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-scripts" (OuterVolumeSpecName: "scripts") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.940456 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 13:19:00 crc kubenswrapper[4955]: I0202 13:19:00.985873 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.029659 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.029705 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrllc\" (UniqueName: \"kubernetes.io/projected/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-kube-api-access-nrllc\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.029721 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.029732 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.029741 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.029767 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.053238 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.065914 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-config-data" (OuterVolumeSpecName: "config-data") pod "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" (UID: "b67b3fc1-0724-4b79-ae1e-f884ad1a34b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.131709 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.131991 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.569045 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b67b3fc1-0724-4b79-ae1e-f884ad1a34b6","Type":"ContainerDied","Data":"051044815cc75cc5ccfe0f80a4ae53d6c3f11fc81e9c8ed05a72d953aa73a6ba"} Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.569104 4955 scope.go:117] "RemoveContainer" containerID="66e4425429e716dfacfbfb29d34b9843dc165583696c5a8a3a0b90ca1fb8cea7" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.569237 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.597020 4955 generic.go:334] "Generic (PLEG): container finished" podID="01fe12bd-03cd-402a-89a5-db886a443423" containerID="c2a6004a394285d858013a2b0d9bde8c131dc4196ac43ddfd67ad2080add130e" exitCode=0 Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.597073 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rvmkt" event={"ID":"01fe12bd-03cd-402a-89a5-db886a443423","Type":"ContainerDied","Data":"c2a6004a394285d858013a2b0d9bde8c131dc4196ac43ddfd67ad2080add130e"} Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.625627 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.633670 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.644777 4955 scope.go:117] "RemoveContainer" containerID="aec045ebda5d0a260372a2973270828c9f629111034696afe40df4b159908aec" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.664698 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665197 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665214 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665237 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-log" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665244 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-log" Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665265 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="registry-server" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665274 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="registry-server" Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665289 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-httpd" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665296 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-httpd" Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665307 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="extract-content" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665314 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="extract-content" Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665323 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="extract-utilities" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665331 4955 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="extract-utilities" Feb 02 13:19:01 crc kubenswrapper[4955]: E0202 13:19:01.665345 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="init" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665353 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="init" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665606 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c399cd3-81fb-4625-8d18-ce2bb6c0b72e" containerName="registry-server" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665626 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-httpd" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665639 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2486e70c-d87b-4e5c-bb3b-19d55ebbf622" containerName="dnsmasq-dns" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.665649 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" containerName="glance-log" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.666787 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.672136 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.672352 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.679050 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.732928 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67b3fc1-0724-4b79-ae1e-f884ad1a34b6" path="/var/lib/kubelet/pods/b67b3fc1-0724-4b79-ae1e-f884ad1a34b6/volumes" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746409 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746522 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746584 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckp7g\" (UniqueName: \"kubernetes.io/projected/6d3ca271-8968-4a48-a1a5-be53a4038119-kube-api-access-ckp7g\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746617 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746641 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746677 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746706 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.746760 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848381 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848496 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848542 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848599 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckp7g\" (UniqueName: \"kubernetes.io/projected/6d3ca271-8968-4a48-a1a5-be53a4038119-kube-api-access-ckp7g\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848630 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848662 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848692 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848717 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.848791 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.849238 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.850007 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.854208 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.854497 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.860688 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.860707 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.869490 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckp7g\" (UniqueName: \"kubernetes.io/projected/6d3ca271-8968-4a48-a1a5-be53a4038119-kube-api-access-ckp7g\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:01 crc kubenswrapper[4955]: I0202 13:19:01.889505 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.007527 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.579824 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.610997 4955 generic.go:334] "Generic (PLEG): container finished" podID="72b66af4-aa8a-4739-8ed1-d55f066b5505" containerID="ace3d1ee29967a36470f8e3b9837d86b184440d57d8a834133cc9b1f0c2f717d" exitCode=0 Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.611082 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdrff" event={"ID":"72b66af4-aa8a-4739-8ed1-d55f066b5505","Type":"ContainerDied","Data":"ace3d1ee29967a36470f8e3b9837d86b184440d57d8a834133cc9b1f0c2f717d"} Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.614592 4955 generic.go:334] "Generic (PLEG): container finished" podID="a474135c-7a61-46ee-af96-680f7139539b" containerID="00debb421c851ec26b4de53a8c239f239fdcf9c8c21aa21c822475f375adefc7" exitCode=0 Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.614705 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wmph" event={"ID":"a474135c-7a61-46ee-af96-680f7139539b","Type":"ContainerDied","Data":"00debb421c851ec26b4de53a8c239f239fdcf9c8c21aa21c822475f375adefc7"} Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.619424 4955 generic.go:334] "Generic (PLEG): container finished" podID="8727b892-9204-4236-9e54-80af117730db" containerID="2a7d51fff74b22d2db13c175da538155113d9a3c971ea37921b2ef2328ca62b4" exitCode=0 Feb 02 13:19:02 crc kubenswrapper[4955]: I0202 13:19:02.619517 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wjnkc" event={"ID":"8727b892-9204-4236-9e54-80af117730db","Type":"ContainerDied","Data":"2a7d51fff74b22d2db13c175da538155113d9a3c971ea37921b2ef2328ca62b4"} Feb 02 13:19:05 crc kubenswrapper[4955]: W0202 13:19:05.382990 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3ca271_8968_4a48_a1a5_be53a4038119.slice/crio-5787e93b03e11252c5c913a55f6d4366b57671de698acbbf6646fca8c470dab0 WatchSource:0}: Error finding container 5787e93b03e11252c5c913a55f6d4366b57671de698acbbf6646fca8c470dab0: Status 404 returned error can't find the container with id 5787e93b03e11252c5c913a55f6d4366b57671de698acbbf6646fca8c470dab0 Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.621975 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdrff" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.629166 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rvmkt" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.640235 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9wmph" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.658936 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wjnkc" event={"ID":"8727b892-9204-4236-9e54-80af117730db","Type":"ContainerDied","Data":"225a333b1b3cca52be375e0d6ab2562b5b3d592576c0f4bb6fe764610a73a5e5"} Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.658975 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225a333b1b3cca52be375e0d6ab2562b5b3d592576c0f4bb6fe764610a73a5e5" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.660974 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d3ca271-8968-4a48-a1a5-be53a4038119","Type":"ContainerStarted","Data":"5787e93b03e11252c5c913a55f6d4366b57671de698acbbf6646fca8c470dab0"} Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.664023 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdrff" event={"ID":"72b66af4-aa8a-4739-8ed1-d55f066b5505","Type":"ContainerDied","Data":"fe9cb1cbef57253b557390b477f744085bfe6293e98130b1dd9c62ab229163a8"} Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.664062 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe9cb1cbef57253b557390b477f744085bfe6293e98130b1dd9c62ab229163a8" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.664150 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdrff" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.680340 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.680502 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wmph" event={"ID":"a474135c-7a61-46ee-af96-680f7139539b","Type":"ContainerDied","Data":"1c1fe0622ed0125cc70925d129b1661821c9ab2e8cc40ee9d776ff5d9d6bbebd"} Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.680524 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1fe0622ed0125cc70925d129b1661821c9ab2e8cc40ee9d776ff5d9d6bbebd" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.680583 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9wmph" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.686064 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rvmkt" event={"ID":"01fe12bd-03cd-402a-89a5-db886a443423","Type":"ContainerDied","Data":"62e2a98849df0ddf2f32c705756e36133235e0ad218966a8b0464aadd50bcd58"} Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.686149 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62e2a98849df0ddf2f32c705756e36133235e0ad218966a8b0464aadd50bcd58" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.686289 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rvmkt" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817029 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-combined-ca-bundle\") pod \"01fe12bd-03cd-402a-89a5-db886a443423\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817406 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-config-data\") pod \"8727b892-9204-4236-9e54-80af117730db\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817434 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-combined-ca-bundle\") pod \"8727b892-9204-4236-9e54-80af117730db\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817470 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-combined-ca-bundle\") pod \"a474135c-7a61-46ee-af96-680f7139539b\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817610 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe12bd-03cd-402a-89a5-db886a443423-logs\") pod \"01fe12bd-03cd-402a-89a5-db886a443423\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817643 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-fernet-keys\") pod \"8727b892-9204-4236-9e54-80af117730db\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817669 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-config-data\") pod \"72b66af4-aa8a-4739-8ed1-d55f066b5505\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817693 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-credential-keys\") pod \"8727b892-9204-4236-9e54-80af117730db\" (UID: 
\"8727b892-9204-4236-9e54-80af117730db\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817713 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjrmx\" (UniqueName: \"kubernetes.io/projected/8727b892-9204-4236-9e54-80af117730db-kube-api-access-mjrmx\") pod \"8727b892-9204-4236-9e54-80af117730db\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817745 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-combined-ca-bundle\") pod \"72b66af4-aa8a-4739-8ed1-d55f066b5505\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817770 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7fq4\" (UniqueName: \"kubernetes.io/projected/72b66af4-aa8a-4739-8ed1-d55f066b5505-kube-api-access-z7fq4\") pod \"72b66af4-aa8a-4739-8ed1-d55f066b5505\" (UID: \"72b66af4-aa8a-4739-8ed1-d55f066b5505\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.817968 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01fe12bd-03cd-402a-89a5-db886a443423-logs" (OuterVolumeSpecName: "logs") pod "01fe12bd-03cd-402a-89a5-db886a443423" (UID: "01fe12bd-03cd-402a-89a5-db886a443423"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.818354 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-config\") pod \"a474135c-7a61-46ee-af96-680f7139539b\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.818434 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-scripts\") pod \"8727b892-9204-4236-9e54-80af117730db\" (UID: \"8727b892-9204-4236-9e54-80af117730db\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.818462 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhhc5\" (UniqueName: \"kubernetes.io/projected/01fe12bd-03cd-402a-89a5-db886a443423-kube-api-access-rhhc5\") pod \"01fe12bd-03cd-402a-89a5-db886a443423\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.818635 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-config-data\") pod \"01fe12bd-03cd-402a-89a5-db886a443423\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.818671 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-scripts\") pod \"01fe12bd-03cd-402a-89a5-db886a443423\" (UID: \"01fe12bd-03cd-402a-89a5-db886a443423\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.818726 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf9tc\" (UniqueName: \"kubernetes.io/projected/a474135c-7a61-46ee-af96-680f7139539b-kube-api-access-qf9tc\") pod 
\"a474135c-7a61-46ee-af96-680f7139539b\" (UID: \"a474135c-7a61-46ee-af96-680f7139539b\") " Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.819339 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe12bd-03cd-402a-89a5-db886a443423-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.821193 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8727b892-9204-4236-9e54-80af117730db" (UID: "8727b892-9204-4236-9e54-80af117730db"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.822183 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8727b892-9204-4236-9e54-80af117730db" (UID: "8727b892-9204-4236-9e54-80af117730db"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.825115 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b66af4-aa8a-4739-8ed1-d55f066b5505-kube-api-access-z7fq4" (OuterVolumeSpecName: "kube-api-access-z7fq4") pod "72b66af4-aa8a-4739-8ed1-d55f066b5505" (UID: "72b66af4-aa8a-4739-8ed1-d55f066b5505"). InnerVolumeSpecName "kube-api-access-z7fq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.833573 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-scripts" (OuterVolumeSpecName: "scripts") pod "8727b892-9204-4236-9e54-80af117730db" (UID: "8727b892-9204-4236-9e54-80af117730db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.834004 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-scripts" (OuterVolumeSpecName: "scripts") pod "01fe12bd-03cd-402a-89a5-db886a443423" (UID: "01fe12bd-03cd-402a-89a5-db886a443423"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.834841 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8727b892-9204-4236-9e54-80af117730db-kube-api-access-mjrmx" (OuterVolumeSpecName: "kube-api-access-mjrmx") pod "8727b892-9204-4236-9e54-80af117730db" (UID: "8727b892-9204-4236-9e54-80af117730db"). InnerVolumeSpecName "kube-api-access-mjrmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.837278 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a474135c-7a61-46ee-af96-680f7139539b-kube-api-access-qf9tc" (OuterVolumeSpecName: "kube-api-access-qf9tc") pod "a474135c-7a61-46ee-af96-680f7139539b" (UID: "a474135c-7a61-46ee-af96-680f7139539b"). InnerVolumeSpecName "kube-api-access-qf9tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.849725 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01fe12bd-03cd-402a-89a5-db886a443423-kube-api-access-rhhc5" (OuterVolumeSpecName: "kube-api-access-rhhc5") pod "01fe12bd-03cd-402a-89a5-db886a443423" (UID: "01fe12bd-03cd-402a-89a5-db886a443423"). InnerVolumeSpecName "kube-api-access-rhhc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.865331 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a474135c-7a61-46ee-af96-680f7139539b" (UID: "a474135c-7a61-46ee-af96-680f7139539b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.865391 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01fe12bd-03cd-402a-89a5-db886a443423" (UID: "01fe12bd-03cd-402a-89a5-db886a443423"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.869192 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-config" (OuterVolumeSpecName: "config") pod "a474135c-7a61-46ee-af96-680f7139539b" (UID: "a474135c-7a61-46ee-af96-680f7139539b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.869363 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-config-data" (OuterVolumeSpecName: "config-data") pod "01fe12bd-03cd-402a-89a5-db886a443423" (UID: "01fe12bd-03cd-402a-89a5-db886a443423"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.883098 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72b66af4-aa8a-4739-8ed1-d55f066b5505" (UID: "72b66af4-aa8a-4739-8ed1-d55f066b5505"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.883704 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8727b892-9204-4236-9e54-80af117730db" (UID: "8727b892-9204-4236-9e54-80af117730db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.885838 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-config-data" (OuterVolumeSpecName: "config-data") pod "8727b892-9204-4236-9e54-80af117730db" (UID: "8727b892-9204-4236-9e54-80af117730db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.897511 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-config-data" (OuterVolumeSpecName: "config-data") pod "72b66af4-aa8a-4739-8ed1-d55f066b5505" (UID: "72b66af4-aa8a-4739-8ed1-d55f066b5505"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920069 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920097 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920107 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920116 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920123 4955 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920131 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920139 4955 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920148 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjrmx\" (UniqueName: \"kubernetes.io/projected/8727b892-9204-4236-9e54-80af117730db-kube-api-access-mjrmx\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920158 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b66af4-aa8a-4739-8ed1-d55f066b5505-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920167 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7fq4\" (UniqueName: \"kubernetes.io/projected/72b66af4-aa8a-4739-8ed1-d55f066b5505-kube-api-access-z7fq4\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920174 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a474135c-7a61-46ee-af96-680f7139539b-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920182 4955 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8727b892-9204-4236-9e54-80af117730db-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920190 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhhc5\" (UniqueName: \"kubernetes.io/projected/01fe12bd-03cd-402a-89a5-db886a443423-kube-api-access-rhhc5\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920198 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920206 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01fe12bd-03cd-402a-89a5-db886a443423-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:05 crc kubenswrapper[4955]: I0202 13:19:05.920215 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf9tc\" (UniqueName: \"kubernetes.io/projected/a474135c-7a61-46ee-af96-680f7139539b-kube-api-access-qf9tc\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.698088 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerStarted","Data":"2f9c9c7d4200fa74fca09274cb3e024bbb889e2e1ee166d4f9f5fc15f61ec777"} Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.699991 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wjnkc" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.700112 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d3ca271-8968-4a48-a1a5-be53a4038119","Type":"ContainerStarted","Data":"f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464"} Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.700252 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d3ca271-8968-4a48-a1a5-be53a4038119","Type":"ContainerStarted","Data":"0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5"} Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.756261 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.756241343 podStartE2EDuration="5.756241343s" podCreationTimestamp="2026-02-02 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:06.743024714 +0000 UTC m=+997.655361164" watchObservedRunningTime="2026-02-02 13:19:06.756241343 +0000 UTC m=+997.668577793" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.781669 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.781725 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.863440 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66b697b848-h98fb"] Feb 02 13:19:06 crc kubenswrapper[4955]: E0202 13:19:06.863917 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a474135c-7a61-46ee-af96-680f7139539b" containerName="neutron-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.863932 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a474135c-7a61-46ee-af96-680f7139539b" containerName="neutron-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: E0202 13:19:06.863946 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8727b892-9204-4236-9e54-80af117730db" containerName="keystone-bootstrap" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.863952 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8727b892-9204-4236-9e54-80af117730db" containerName="keystone-bootstrap" Feb 02 13:19:06 crc kubenswrapper[4955]: E0202 13:19:06.863972 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01fe12bd-03cd-402a-89a5-db886a443423" containerName="placement-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.863979 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="01fe12bd-03cd-402a-89a5-db886a443423" containerName="placement-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: E0202 13:19:06.863988 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b66af4-aa8a-4739-8ed1-d55f066b5505" containerName="heat-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.863994 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b66af4-aa8a-4739-8ed1-d55f066b5505" containerName="heat-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.864163 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8727b892-9204-4236-9e54-80af117730db" containerName="keystone-bootstrap" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.864179 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b66af4-aa8a-4739-8ed1-d55f066b5505" containerName="heat-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.864194 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a474135c-7a61-46ee-af96-680f7139539b" containerName="neutron-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.864207 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="01fe12bd-03cd-402a-89a5-db886a443423" containerName="placement-db-sync" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.865105 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.867908 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.869502 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.870712 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.871085 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.876255 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.876346 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.876655 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7t2ql" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.896551 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79d6d6d4df-7xvpd"] Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.897928 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.915398 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.915474 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.915864 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.916015 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jnmgd" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.916531 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.916746 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.939701 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66b697b848-h98fb"] Feb 02 13:19:06 crc kubenswrapper[4955]: I0202 13:19:06.957874 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d6d6d4df-7xvpd"] Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.005378 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fbrpk"] Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.007160 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.041133 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fbrpk"] Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046411 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-internal-tls-certs\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046462 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88662989-310c-4dc2-9f6e-26c35fcf8da3-logs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046485 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-internal-tls-certs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046522 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-config-data\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046581 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-scripts\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046605 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-public-tls-certs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7bq\" (UniqueName: \"kubernetes.io/projected/88662989-310c-4dc2-9f6e-26c35fcf8da3-kube-api-access-md7bq\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046651 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwvqk\" (UniqueName: \"kubernetes.io/projected/71251c83-5af4-4373-8a77-0522fcda630e-kube-api-access-fwvqk\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046666 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-public-tls-certs\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046692 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-combined-ca-bundle\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046730 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-config-data\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046751 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-credential-keys\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046771 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-fernet-keys\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046794 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-scripts\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.046811 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-combined-ca-bundle\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147709 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147750 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc 
kubenswrapper[4955]: I0202 13:19:07.147776 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-config-data\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147796 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-credential-keys\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147817 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-config\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147832 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-fernet-keys\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147858 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-scripts\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147876 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-combined-ca-bundle\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147901 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-internal-tls-certs\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147916 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147940 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88662989-310c-4dc2-9f6e-26c35fcf8da3-logs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147958 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-internal-tls-certs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.147979 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-config-data\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148014 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-scripts\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148033 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9l6\" (UniqueName: \"kubernetes.io/projected/3480584e-cd7c-4621-a6b0-a64b6a6611ce-kube-api-access-tg9l6\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148057 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-public-tls-certs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148076 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7bq\" (UniqueName: \"kubernetes.io/projected/88662989-310c-4dc2-9f6e-26c35fcf8da3-kube-api-access-md7bq\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwvqk\" (UniqueName: \"kubernetes.io/projected/71251c83-5af4-4373-8a77-0522fcda630e-kube-api-access-fwvqk\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148117 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-public-tls-certs\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148142 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-combined-ca-bundle\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.148163 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.154936 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-fernet-keys\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.155138 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-scripts\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.155368 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88662989-310c-4dc2-9f6e-26c35fcf8da3-logs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.155723 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-scripts\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.155770 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-internal-tls-certs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.155834 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-config-data\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.156899 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-public-tls-certs\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.156983 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-combined-ca-bundle\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.157245 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-credential-keys\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd" 
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.160223 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-config-data\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.163996 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-combined-ca-bundle\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.167614 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-public-tls-certs\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.168425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71251c83-5af4-4373-8a77-0522fcda630e-internal-tls-certs\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.192854 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7bq\" (UniqueName: \"kubernetes.io/projected/88662989-310c-4dc2-9f6e-26c35fcf8da3-kube-api-access-md7bq\") pod \"placement-66b697b848-h98fb\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " pod="openstack/placement-66b697b848-h98fb"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.194192 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwvqk\" (UniqueName: \"kubernetes.io/projected/71251c83-5af4-4373-8a77-0522fcda630e-kube-api-access-fwvqk\") pod \"keystone-79d6d6d4df-7xvpd\" (UID: \"71251c83-5af4-4373-8a77-0522fcda630e\") " pod="openstack/keystone-79d6d6d4df-7xvpd"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.197192 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66b697b848-h98fb"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.235954 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79d6d6d4df-7xvpd"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.249290 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9l6\" (UniqueName: \"kubernetes.io/projected/3480584e-cd7c-4621-a6b0-a64b6a6611ce-kube-api-access-tg9l6\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.249399 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.249460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.249489 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.249517 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-config\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.249582 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.250671 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.252052 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.252767 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.253373 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.254154 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-config\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.266118 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-584b87959d-4rkvv"]
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.267457 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-584b87959d-4rkvv"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.284007 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.284249 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.284583 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nwdv4"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.292218 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.304492 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cd4978596-vxlhp"]
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.305870 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cd4978596-vxlhp"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.307781 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9l6\" (UniqueName: \"kubernetes.io/projected/3480584e-cd7c-4621-a6b0-a64b6a6611ce-kube-api-access-tg9l6\") pod \"dnsmasq-dns-55f844cf75-fbrpk\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " pod="openstack/dnsmasq-dns-55f844cf75-fbrpk"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.333108 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-584b87959d-4rkvv"]
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.356374 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-ovndb-tls-certs\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.356862 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-combined-ca-bundle\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.357005 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-config\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.357144 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-httpd-config\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.357009 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cd4978596-vxlhp"]
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.357414 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n76j\" (UniqueName: \"kubernetes.io/projected/1dee501b-e122-4870-b3bb-4096d3dcc975-kube-api-access-4n76j\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv"
Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.357495 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.458868 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-httpd-config\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.458931 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-scripts\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.458977 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-config-data\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459016 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n76j\" (UniqueName: \"kubernetes.io/projected/1dee501b-e122-4870-b3bb-4096d3dcc975-kube-api-access-4n76j\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459033 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0085d2-b778-45fa-9352-ae25b43713c1-logs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459052 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5qs\" (UniqueName: \"kubernetes.io/projected/fb0085d2-b778-45fa-9352-ae25b43713c1-kube-api-access-9d5qs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459084 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-combined-ca-bundle\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459109 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-ovndb-tls-certs\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459130 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-combined-ca-bundle\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " 
pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459152 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-internal-tls-certs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459180 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-config\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.459206 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-public-tls-certs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.474150 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-ovndb-tls-certs\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.474445 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-combined-ca-bundle\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.474904 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-httpd-config\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.480463 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-config\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.501838 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n76j\" (UniqueName: \"kubernetes.io/projected/1dee501b-e122-4870-b3bb-4096d3dcc975-kube-api-access-4n76j\") pod \"neutron-584b87959d-4rkvv\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.560826 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-scripts\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.560907 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-config-data\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.560955 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0085d2-b778-45fa-9352-ae25b43713c1-logs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.560982 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5qs\" (UniqueName: \"kubernetes.io/projected/fb0085d2-b778-45fa-9352-ae25b43713c1-kube-api-access-9d5qs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.561034 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-combined-ca-bundle\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.561090 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-internal-tls-certs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.561144 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-public-tls-certs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.562728 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0085d2-b778-45fa-9352-ae25b43713c1-logs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.570196 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-config-data\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.571174 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-combined-ca-bundle\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.576225 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-internal-tls-certs\") 
pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.581532 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-scripts\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.584120 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0085d2-b778-45fa-9352-ae25b43713c1-public-tls-certs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.649522 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5qs\" (UniqueName: \"kubernetes.io/projected/fb0085d2-b778-45fa-9352-ae25b43713c1-kube-api-access-9d5qs\") pod \"placement-cd4978596-vxlhp\" (UID: \"fb0085d2-b778-45fa-9352-ae25b43713c1\") " pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.673468 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.761835 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.763199 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.911687 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:07 crc kubenswrapper[4955]: I0202 13:19:07.963538 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d6d6d4df-7xvpd"] Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.139230 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66b697b848-h98fb"] Feb 02 13:19:08 crc kubenswrapper[4955]: W0202 13:19:08.142154 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88662989_310c_4dc2_9f6e_26c35fcf8da3.slice/crio-5f241f5b97653e84023079556f0819c56e085fa56bc0a805447a8db40d50e3b0 WatchSource:0}: Error finding container 5f241f5b97653e84023079556f0819c56e085fa56bc0a805447a8db40d50e3b0: Status 404 returned error can't find the container with id 5f241f5b97653e84023079556f0819c56e085fa56bc0a805447a8db40d50e3b0 Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.198122 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fbrpk"] Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.480405 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-584b87959d-4rkvv"] Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.530590 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cd4978596-vxlhp"] Feb 02 13:19:08 crc kubenswrapper[4955]: W0202 13:19:08.532926 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb0085d2_b778_45fa_9352_ae25b43713c1.slice/crio-b54096995dee79ce700fd43f728cda98902688e4617237646a2f7707dd8e772a WatchSource:0}: Error finding container b54096995dee79ce700fd43f728cda98902688e4617237646a2f7707dd8e772a: Status 404 returned error can't find the container with id b54096995dee79ce700fd43f728cda98902688e4617237646a2f7707dd8e772a Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.765304 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d6d6d4df-7xvpd" event={"ID":"71251c83-5af4-4373-8a77-0522fcda630e","Type":"ContainerStarted","Data":"558f6a2ca9e3d759e3bddd4294c31fdacc85008da3819d279a2c53331321fee5"} Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.765346 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d6d6d4df-7xvpd" event={"ID":"71251c83-5af4-4373-8a77-0522fcda630e","Type":"ContainerStarted","Data":"e97bfca1023d0ca9513df15b9b7fcd1de7a36154321da3d596cd61a9d4d47416"} Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.766488 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.768324 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66b697b848-h98fb" event={"ID":"88662989-310c-4dc2-9f6e-26c35fcf8da3","Type":"ContainerStarted","Data":"5f241f5b97653e84023079556f0819c56e085fa56bc0a805447a8db40d50e3b0"} Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.771071 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" event={"ID":"3480584e-cd7c-4621-a6b0-a64b6a6611ce","Type":"ContainerStarted","Data":"24269510a64585fa60103a313b0f1bc3eece2433bff6d3445149cdf4e4a545cc"} Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.776071 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd4978596-vxlhp" 
event={"ID":"fb0085d2-b778-45fa-9352-ae25b43713c1","Type":"ContainerStarted","Data":"b54096995dee79ce700fd43f728cda98902688e4617237646a2f7707dd8e772a"} Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.797513 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79d6d6d4df-7xvpd" podStartSLOduration=2.797487651 podStartE2EDuration="2.797487651s" podCreationTimestamp="2026-02-02 13:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:08.790500243 +0000 UTC m=+999.702836703" watchObservedRunningTime="2026-02-02 13:19:08.797487651 +0000 UTC m=+999.709824101" Feb 02 13:19:08 crc kubenswrapper[4955]: I0202 13:19:08.799984 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b87959d-4rkvv" event={"ID":"1dee501b-e122-4870-b3bb-4096d3dcc975","Type":"ContainerStarted","Data":"dcf6a7ffd22950608c2af4eee9a8954041c4968fc1d752ec7ee0fbe11cbda077"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.706760 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74cbd57d57-fclbt"] Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.708968 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.712627 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.713726 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.737119 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cbd57d57-fclbt"] Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810218 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drmcz\" (UniqueName: \"kubernetes.io/projected/41553210-5298-4320-8317-50cb36029594-kube-api-access-drmcz\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810294 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-httpd-config\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810332 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-config\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810371 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-public-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810462 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-ovndb-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810536 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-internal-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.810602 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-combined-ca-bundle\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.822566 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66b697b848-h98fb" event={"ID":"88662989-310c-4dc2-9f6e-26c35fcf8da3","Type":"ContainerStarted","Data":"c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.822606 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66b697b848-h98fb" event={"ID":"88662989-310c-4dc2-9f6e-26c35fcf8da3","Type":"ContainerStarted","Data":"b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.822657 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.822709 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.825844 4955 generic.go:334] "Generic (PLEG): container finished" podID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerID="916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08" exitCode=0 Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.825919 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" event={"ID":"3480584e-cd7c-4621-a6b0-a64b6a6611ce","Type":"ContainerDied","Data":"916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.835369 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd4978596-vxlhp" event={"ID":"fb0085d2-b778-45fa-9352-ae25b43713c1","Type":"ContainerStarted","Data":"83b44d5cb056a594fe0e6f79e50123d2e3c05409cf4330ff7b985392d58cdbe2"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.835418 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd4978596-vxlhp" event={"ID":"fb0085d2-b778-45fa-9352-ae25b43713c1","Type":"ContainerStarted","Data":"cafffa846cb702d9258e642187a5a9f2ee778247a681d176f9c5a27cf75679e9"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.836224 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.836253 4955 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.843230 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.843261 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.843313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b87959d-4rkvv" event={"ID":"1dee501b-e122-4870-b3bb-4096d3dcc975","Type":"ContainerStarted","Data":"5c218e0830897ee24d6a272a3af19e0b501794b096ae5959823a6705204402d8"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.843346 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b87959d-4rkvv" event={"ID":"1dee501b-e122-4870-b3bb-4096d3dcc975","Type":"ContainerStarted","Data":"6fb377675d21b30988b781602e0ff194c6d3a5ec8482f3ed3372259fb696f250"} Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.912654 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drmcz\" (UniqueName: \"kubernetes.io/projected/41553210-5298-4320-8317-50cb36029594-kube-api-access-drmcz\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.912938 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-httpd-config\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.912963 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-config\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.912987 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-public-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.913021 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-ovndb-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.913057 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-internal-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.913095 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-combined-ca-bundle\") pod 
\"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.927202 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-584b87959d-4rkvv" podStartSLOduration=2.92716628 podStartE2EDuration="2.92716628s" podCreationTimestamp="2026-02-02 13:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:09.916762409 +0000 UTC m=+1000.829098869" watchObservedRunningTime="2026-02-02 13:19:09.92716628 +0000 UTC m=+1000.839502750" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.965372 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cd4978596-vxlhp" podStartSLOduration=2.965353521 podStartE2EDuration="2.965353521s" podCreationTimestamp="2026-02-02 13:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:09.963295962 +0000 UTC m=+1000.875632412" watchObservedRunningTime="2026-02-02 13:19:09.965353521 +0000 UTC m=+1000.877689971" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.982574 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-httpd-config\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.984415 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-combined-ca-bundle\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:09 crc kubenswrapper[4955]: I0202 13:19:09.996291 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-internal-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.002204 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-ovndb-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.002770 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drmcz\" (UniqueName: \"kubernetes.io/projected/41553210-5298-4320-8317-50cb36029594-kube-api-access-drmcz\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.003276 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-config\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.004312 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41553210-5298-4320-8317-50cb36029594-public-tls-certs\") pod \"neutron-74cbd57d57-fclbt\" (UID: \"41553210-5298-4320-8317-50cb36029594\") " pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.028438 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66b697b848-h98fb" podStartSLOduration=4.028414301 podStartE2EDuration="4.028414301s" podCreationTimestamp="2026-02-02 13:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:10.014007124 +0000 UTC m=+1000.926343584" watchObservedRunningTime="2026-02-02 13:19:10.028414301 +0000 UTC m=+1000.940750751" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.047678 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.874230 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.890436 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:19:10 crc kubenswrapper[4955]: I0202 13:19:10.892045 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.291494 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cbd57d57-fclbt"] Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.433179 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.889953 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cbd57d57-fclbt" event={"ID":"41553210-5298-4320-8317-50cb36029594","Type":"ContainerStarted","Data":"1004533414b040669e37b6f45b2b474ba79fd484c3551004f59b33d73e1d5488"} Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.890001 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cbd57d57-fclbt" event={"ID":"41553210-5298-4320-8317-50cb36029594","Type":"ContainerStarted","Data":"12aa2744d8bff5d7c53323d2522eeae065782315bd290d337644dd10e8777bac"} Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.890011 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cbd57d57-fclbt" event={"ID":"41553210-5298-4320-8317-50cb36029594","Type":"ContainerStarted","Data":"fee61a4261d6375b479e4439b7b2d8ea1d79826dc963d8059a55ef3d1a9e3f5b"} Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.891091 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.894195 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" event={"ID":"3480584e-cd7c-4621-a6b0-a64b6a6611ce","Type":"ContainerStarted","Data":"35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91"} Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.894231 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 
13:19:11.924198 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74cbd57d57-fclbt" podStartSLOduration=2.924174072 podStartE2EDuration="2.924174072s" podCreationTimestamp="2026-02-02 13:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:11.911407395 +0000 UTC m=+1002.823743845" watchObservedRunningTime="2026-02-02 13:19:11.924174072 +0000 UTC m=+1002.836510532" Feb 02 13:19:11 crc kubenswrapper[4955]: I0202 13:19:11.939855 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" podStartSLOduration=5.9398345599999995 podStartE2EDuration="5.93983456s" podCreationTimestamp="2026-02-02 13:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:11.935090566 +0000 UTC m=+1002.847427036" watchObservedRunningTime="2026-02-02 13:19:11.93983456 +0000 UTC m=+1002.852171010" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.008808 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.011309 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.040961 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.057766 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.906644 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4q9j2" event={"ID":"fc8aafab-3905-4e44-ba3e-134253a38a60","Type":"ContainerStarted","Data":"0c399c966c77bb8d5912e2f2d8df59def2b63e84dea1b6395d56bf8303d46592"} Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.907949 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.908001 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:12 crc kubenswrapper[4955]: I0202 13:19:12.926709 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4q9j2" podStartSLOduration=2.66491352 podStartE2EDuration="42.926694665s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="2026-02-02 13:18:32.098280798 +0000 UTC m=+963.010617248" lastFinishedPulling="2026-02-02 13:19:12.360061943 +0000 UTC m=+1003.272398393" observedRunningTime="2026-02-02 13:19:12.925974518 +0000 UTC m=+1003.838310968" watchObservedRunningTime="2026-02-02 13:19:12.926694665 +0000 UTC m=+1003.839031115" Feb 02 13:19:13 crc kubenswrapper[4955]: I0202 13:19:13.918519 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjvr2" event={"ID":"462f37c8-5909-418b-bf1f-58af764957ab","Type":"ContainerStarted","Data":"41c8a504526ef5a12d5e594f03cf85cbbe88b707858ccf962c333b46f02da204"} Feb 02 13:19:13 crc kubenswrapper[4955]: I0202 13:19:13.948256 4955 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-db-sync-wjvr2" podStartSLOduration=3.358303798 podStartE2EDuration="43.948233577s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="2026-02-02 13:18:31.769800435 +0000 UTC m=+962.682136885" lastFinishedPulling="2026-02-02 13:19:12.359730224 +0000 UTC m=+1003.272066664" observedRunningTime="2026-02-02 13:19:13.93381283 +0000 UTC m=+1004.846149290" watchObservedRunningTime="2026-02-02 13:19:13.948233577 +0000 UTC m=+1004.860570037" Feb 02 13:19:14 crc kubenswrapper[4955]: I0202 13:19:14.918578 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:14 crc kubenswrapper[4955]: I0202 13:19:14.920139 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:17 crc kubenswrapper[4955]: I0202 13:19:17.359747 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:17 crc kubenswrapper[4955]: I0202 13:19:17.412589 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-hdvcn"] Feb 02 13:19:17 crc kubenswrapper[4955]: I0202 13:19:17.412860 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" podUID="009d1cab-2cab-475b-9722-881450cee4a5" containerName="dnsmasq-dns" containerID="cri-o://6d4ead3d5948168fb325a5f4ccbb6ce60ebad36bd6f163e2583b07ab6484754f" gracePeriod=10 Feb 02 13:19:17 crc kubenswrapper[4955]: I0202 13:19:17.956585 4955 generic.go:334] "Generic (PLEG): container finished" podID="009d1cab-2cab-475b-9722-881450cee4a5" containerID="6d4ead3d5948168fb325a5f4ccbb6ce60ebad36bd6f163e2583b07ab6484754f" exitCode=0 Feb 02 13:19:17 crc kubenswrapper[4955]: I0202 13:19:17.956661 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" event={"ID":"009d1cab-2cab-475b-9722-881450cee4a5","Type":"ContainerDied","Data":"6d4ead3d5948168fb325a5f4ccbb6ce60ebad36bd6f163e2583b07ab6484754f"} Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.384418 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.510756 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-swift-storage-0\") pod \"009d1cab-2cab-475b-9722-881450cee4a5\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.510831 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-svc\") pod \"009d1cab-2cab-475b-9722-881450cee4a5\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.510865 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-config\") pod \"009d1cab-2cab-475b-9722-881450cee4a5\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.510926 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-sb\") pod \"009d1cab-2cab-475b-9722-881450cee4a5\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.512523 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-nb\") pod \"009d1cab-2cab-475b-9722-881450cee4a5\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.512611 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2sbm\" (UniqueName: \"kubernetes.io/projected/009d1cab-2cab-475b-9722-881450cee4a5-kube-api-access-c2sbm\") pod \"009d1cab-2cab-475b-9722-881450cee4a5\" (UID: \"009d1cab-2cab-475b-9722-881450cee4a5\") " Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.518163 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009d1cab-2cab-475b-9722-881450cee4a5-kube-api-access-c2sbm" (OuterVolumeSpecName: "kube-api-access-c2sbm") pod "009d1cab-2cab-475b-9722-881450cee4a5" (UID: "009d1cab-2cab-475b-9722-881450cee4a5"). InnerVolumeSpecName "kube-api-access-c2sbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.562700 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "009d1cab-2cab-475b-9722-881450cee4a5" (UID: "009d1cab-2cab-475b-9722-881450cee4a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.565471 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "009d1cab-2cab-475b-9722-881450cee4a5" (UID: "009d1cab-2cab-475b-9722-881450cee4a5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.565651 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "009d1cab-2cab-475b-9722-881450cee4a5" (UID: "009d1cab-2cab-475b-9722-881450cee4a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.566967 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-config" (OuterVolumeSpecName: "config") pod "009d1cab-2cab-475b-9722-881450cee4a5" (UID: "009d1cab-2cab-475b-9722-881450cee4a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.568808 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "009d1cab-2cab-475b-9722-881450cee4a5" (UID: "009d1cab-2cab-475b-9722-881450cee4a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.616216 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.616270 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.616670 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2sbm\" (UniqueName: \"kubernetes.io/projected/009d1cab-2cab-475b-9722-881450cee4a5-kube-api-access-c2sbm\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.616688 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.616700 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.616711 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009d1cab-2cab-475b-9722-881450cee4a5-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.979634 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" event={"ID":"009d1cab-2cab-475b-9722-881450cee4a5","Type":"ContainerDied","Data":"d652a593f5bd055d0b3c1168231edcccfd8614163b6d045c593e735248c7c236"} Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.979927 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-hdvcn" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.979968 4955 scope.go:117] "RemoveContainer" containerID="6d4ead3d5948168fb325a5f4ccbb6ce60ebad36bd6f163e2583b07ab6484754f" Feb 02 13:19:19 crc kubenswrapper[4955]: I0202 13:19:19.999164 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-hdvcn"] Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.006705 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-hdvcn"] Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.012601 4955 scope.go:117] "RemoveContainer" containerID="3293e7f19a636ec85bc0c42981450393440d8f8f0bd95f8d279f16ced7186b65" Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.993665 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerStarted","Data":"473c6ef09c99a422721897e64cb2666d559c1e381e602f5781431d61fb8137f1"} Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.993833 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-central-agent" containerID="cri-o://0fa0e8a211764f1a621af9c25b87737c29a0e60393abf27277a89be3b8a56953" gracePeriod=30 Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.994102 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.994130 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="proxy-httpd" containerID="cri-o://473c6ef09c99a422721897e64cb2666d559c1e381e602f5781431d61fb8137f1" gracePeriod=30 Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.994199 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="sg-core" containerID="cri-o://2f9c9c7d4200fa74fca09274cb3e024bbb889e2e1ee166d4f9f5fc15f61ec777" gracePeriod=30 Feb 02 13:19:20 crc kubenswrapper[4955]: I0202 13:19:20.994187 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-notification-agent" containerID="cri-o://cde878752e8a45393e592132225b1cd2ca435b8c4a23cbbbff193d2dceb3d78d" gracePeriod=30 Feb 02 13:19:21 crc kubenswrapper[4955]: I0202 13:19:21.019917 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.100007647 podStartE2EDuration="51.01989551s" podCreationTimestamp="2026-02-02 13:18:30 +0000 UTC" firstStartedPulling="2026-02-02 13:18:31.948414889 +0000 UTC m=+962.860751339" lastFinishedPulling="2026-02-02 13:19:19.868302752 +0000 UTC m=+1010.780639202" observedRunningTime="2026-02-02 13:19:21.013942555 +0000 UTC m=+1011.926279005" watchObservedRunningTime="2026-02-02 13:19:21.01989551 +0000 UTC m=+1011.932231960" Feb 02 13:19:21 crc kubenswrapper[4955]: I0202 13:19:21.727308 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009d1cab-2cab-475b-9722-881450cee4a5" path="/var/lib/kubelet/pods/009d1cab-2cab-475b-9722-881450cee4a5/volumes" Feb 02 13:19:22 crc kubenswrapper[4955]: I0202 13:19:22.006841 4955 
generic.go:334] "Generic (PLEG): container finished" podID="a510324b-16f3-4585-abc9-ae66997c2987" containerID="473c6ef09c99a422721897e64cb2666d559c1e381e602f5781431d61fb8137f1" exitCode=0 Feb 02 13:19:22 crc kubenswrapper[4955]: I0202 13:19:22.006875 4955 generic.go:334] "Generic (PLEG): container finished" podID="a510324b-16f3-4585-abc9-ae66997c2987" containerID="2f9c9c7d4200fa74fca09274cb3e024bbb889e2e1ee166d4f9f5fc15f61ec777" exitCode=2 Feb 02 13:19:22 crc kubenswrapper[4955]: I0202 13:19:22.006887 4955 generic.go:334] "Generic (PLEG): container finished" podID="a510324b-16f3-4585-abc9-ae66997c2987" containerID="0fa0e8a211764f1a621af9c25b87737c29a0e60393abf27277a89be3b8a56953" exitCode=0 Feb 02 13:19:22 crc kubenswrapper[4955]: I0202 13:19:22.006908 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerDied","Data":"473c6ef09c99a422721897e64cb2666d559c1e381e602f5781431d61fb8137f1"} Feb 02 13:19:22 crc kubenswrapper[4955]: I0202 13:19:22.007430 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerDied","Data":"2f9c9c7d4200fa74fca09274cb3e024bbb889e2e1ee166d4f9f5fc15f61ec777"} Feb 02 13:19:22 crc kubenswrapper[4955]: I0202 13:19:22.007443 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerDied","Data":"0fa0e8a211764f1a621af9c25b87737c29a0e60393abf27277a89be3b8a56953"} Feb 02 13:19:24 crc kubenswrapper[4955]: I0202 13:19:24.023918 4955 generic.go:334] "Generic (PLEG): container finished" podID="fc8aafab-3905-4e44-ba3e-134253a38a60" containerID="0c399c966c77bb8d5912e2f2d8df59def2b63e84dea1b6395d56bf8303d46592" exitCode=0 Feb 02 13:19:24 crc kubenswrapper[4955]: I0202 13:19:24.023977 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4q9j2" event={"ID":"fc8aafab-3905-4e44-ba3e-134253a38a60","Type":"ContainerDied","Data":"0c399c966c77bb8d5912e2f2d8df59def2b63e84dea1b6395d56bf8303d46592"} Feb 02 13:19:24 crc kubenswrapper[4955]: I0202 13:19:24.027860 4955 generic.go:334] "Generic (PLEG): container finished" podID="462f37c8-5909-418b-bf1f-58af764957ab" containerID="41c8a504526ef5a12d5e594f03cf85cbbe88b707858ccf962c333b46f02da204" exitCode=0 Feb 02 13:19:24 crc kubenswrapper[4955]: I0202 13:19:24.027903 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjvr2" event={"ID":"462f37c8-5909-418b-bf1f-58af764957ab","Type":"ContainerDied","Data":"41c8a504526ef5a12d5e594f03cf85cbbe88b707858ccf962c333b46f02da204"} Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.093122 4955 generic.go:334] "Generic (PLEG): container finished" podID="a510324b-16f3-4585-abc9-ae66997c2987" containerID="cde878752e8a45393e592132225b1cd2ca435b8c4a23cbbbff193d2dceb3d78d" exitCode=0 Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.093291 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerDied","Data":"cde878752e8a45393e592132225b1cd2ca435b8c4a23cbbbff193d2dceb3d78d"} Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.387680 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420237 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-log-httpd\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420301 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-sg-core-conf-yaml\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420391 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-config-data\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420417 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-scripts\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420439 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-combined-ca-bundle\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420513 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-run-httpd\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.420752 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvmj9\" (UniqueName: \"kubernetes.io/projected/a510324b-16f3-4585-abc9-ae66997c2987-kube-api-access-zvmj9\") pod \"a510324b-16f3-4585-abc9-ae66997c2987\" (UID: \"a510324b-16f3-4585-abc9-ae66997c2987\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.422250 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.422292 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.434611 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-scripts" (OuterVolumeSpecName: "scripts") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.434662 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a510324b-16f3-4585-abc9-ae66997c2987-kube-api-access-zvmj9" (OuterVolumeSpecName: "kube-api-access-zvmj9") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "kube-api-access-zvmj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.449492 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.496308 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.501718 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.506472 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.521877 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-db-sync-config-data\") pod \"fc8aafab-3905-4e44-ba3e-134253a38a60\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.521925 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-combined-ca-bundle\") pod \"462f37c8-5909-418b-bf1f-58af764957ab\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.521943 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-config-data\") pod \"462f37c8-5909-418b-bf1f-58af764957ab\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.521962 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p767d\" (UniqueName: \"kubernetes.io/projected/fc8aafab-3905-4e44-ba3e-134253a38a60-kube-api-access-p767d\") pod \"fc8aafab-3905-4e44-ba3e-134253a38a60\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.521990 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-combined-ca-bundle\") pod \"fc8aafab-3905-4e44-ba3e-134253a38a60\" (UID: \"fc8aafab-3905-4e44-ba3e-134253a38a60\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522005 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-db-sync-config-data\") pod \"462f37c8-5909-418b-bf1f-58af764957ab\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522024 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2nmh\" (UniqueName: \"kubernetes.io/projected/462f37c8-5909-418b-bf1f-58af764957ab-kube-api-access-g2nmh\") pod \"462f37c8-5909-418b-bf1f-58af764957ab\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522080 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-scripts\") pod \"462f37c8-5909-418b-bf1f-58af764957ab\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522121 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462f37c8-5909-418b-bf1f-58af764957ab-etc-machine-id\") pod \"462f37c8-5909-418b-bf1f-58af764957ab\" (UID: \"462f37c8-5909-418b-bf1f-58af764957ab\") " Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522428 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc 
kubenswrapper[4955]: I0202 13:19:25.522447 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522458 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522466 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522475 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a510324b-16f3-4585-abc9-ae66997c2987-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522484 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvmj9\" (UniqueName: \"kubernetes.io/projected/a510324b-16f3-4585-abc9-ae66997c2987-kube-api-access-zvmj9\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.522533 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462f37c8-5909-418b-bf1f-58af764957ab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "462f37c8-5909-418b-bf1f-58af764957ab" (UID: "462f37c8-5909-418b-bf1f-58af764957ab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.529211 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8aafab-3905-4e44-ba3e-134253a38a60-kube-api-access-p767d" (OuterVolumeSpecName: "kube-api-access-p767d") pod "fc8aafab-3905-4e44-ba3e-134253a38a60" (UID: "fc8aafab-3905-4e44-ba3e-134253a38a60"). InnerVolumeSpecName "kube-api-access-p767d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.537514 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462f37c8-5909-418b-bf1f-58af764957ab-kube-api-access-g2nmh" (OuterVolumeSpecName: "kube-api-access-g2nmh") pod "462f37c8-5909-418b-bf1f-58af764957ab" (UID: "462f37c8-5909-418b-bf1f-58af764957ab"). InnerVolumeSpecName "kube-api-access-g2nmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.538719 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-scripts" (OuterVolumeSpecName: "scripts") pod "462f37c8-5909-418b-bf1f-58af764957ab" (UID: "462f37c8-5909-418b-bf1f-58af764957ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.538753 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "462f37c8-5909-418b-bf1f-58af764957ab" (UID: "462f37c8-5909-418b-bf1f-58af764957ab"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.538865 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fc8aafab-3905-4e44-ba3e-134253a38a60" (UID: "fc8aafab-3905-4e44-ba3e-134253a38a60"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.539261 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-config-data" (OuterVolumeSpecName: "config-data") pod "a510324b-16f3-4585-abc9-ae66997c2987" (UID: "a510324b-16f3-4585-abc9-ae66997c2987"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.556920 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462f37c8-5909-418b-bf1f-58af764957ab" (UID: "462f37c8-5909-418b-bf1f-58af764957ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.566696 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc8aafab-3905-4e44-ba3e-134253a38a60" (UID: "fc8aafab-3905-4e44-ba3e-134253a38a60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.577550 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-config-data" (OuterVolumeSpecName: "config-data") pod "462f37c8-5909-418b-bf1f-58af764957ab" (UID: "462f37c8-5909-418b-bf1f-58af764957ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623432 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623472 4955 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462f37c8-5909-418b-bf1f-58af764957ab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623483 4955 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623492 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623504 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623513 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p767d\" (UniqueName: \"kubernetes.io/projected/fc8aafab-3905-4e44-ba3e-134253a38a60-kube-api-access-p767d\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623522 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8aafab-3905-4e44-ba3e-134253a38a60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623531 4955 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/462f37c8-5909-418b-bf1f-58af764957ab-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623540 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2nmh\" (UniqueName: \"kubernetes.io/projected/462f37c8-5909-418b-bf1f-58af764957ab-kube-api-access-g2nmh\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:25 crc kubenswrapper[4955]: I0202 13:19:25.623548 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a510324b-16f3-4585-abc9-ae66997c2987-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.112832 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a510324b-16f3-4585-abc9-ae66997c2987","Type":"ContainerDied","Data":"47a58a8e70524af6ed532a402b560bbfc3837bd4a0cc952e54d8f2f45c3f8723"} Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.112890 4955 scope.go:117] "RemoveContainer" containerID="473c6ef09c99a422721897e64cb2666d559c1e381e602f5781431d61fb8137f1" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.112912 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.115709 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4q9j2" event={"ID":"fc8aafab-3905-4e44-ba3e-134253a38a60","Type":"ContainerDied","Data":"e3bcbd6e4db9f5b293f72e0b7467c349975ad1cd4bcd438ef20e4e49c2ce5564"} Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.115754 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3bcbd6e4db9f5b293f72e0b7467c349975ad1cd4bcd438ef20e4e49c2ce5564" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.115772 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4q9j2" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.118128 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjvr2" event={"ID":"462f37c8-5909-418b-bf1f-58af764957ab","Type":"ContainerDied","Data":"ceea223a8e85bb0f2b19a3514815c0a4410f0e003dd8e57ac234d2fac044d49f"} Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.118163 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wjvr2" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.118180 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceea223a8e85bb0f2b19a3514815c0a4410f0e003dd8e57ac234d2fac044d49f" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.138004 4955 scope.go:117] "RemoveContainer" containerID="2f9c9c7d4200fa74fca09274cb3e024bbb889e2e1ee166d4f9f5fc15f61ec777" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.143505 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.154613 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.170496 4955 scope.go:117] "RemoveContainer" containerID="cde878752e8a45393e592132225b1cd2ca435b8c4a23cbbbff193d2dceb3d78d" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173009 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173391 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-notification-agent" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173411 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-notification-agent" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173425 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-central-agent" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173432 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-central-agent" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173445 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="proxy-httpd" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173452 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="proxy-httpd" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 
13:19:26.173465 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8aafab-3905-4e44-ba3e-134253a38a60" containerName="barbican-db-sync" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173470 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8aafab-3905-4e44-ba3e-134253a38a60" containerName="barbican-db-sync" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173485 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009d1cab-2cab-475b-9722-881450cee4a5" containerName="init" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173490 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="009d1cab-2cab-475b-9722-881450cee4a5" containerName="init" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173500 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="sg-core" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173505 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="sg-core" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173516 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009d1cab-2cab-475b-9722-881450cee4a5" containerName="dnsmasq-dns" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173522 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="009d1cab-2cab-475b-9722-881450cee4a5" containerName="dnsmasq-dns" Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.173536 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462f37c8-5909-418b-bf1f-58af764957ab" containerName="cinder-db-sync" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173541 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="462f37c8-5909-418b-bf1f-58af764957ab" containerName="cinder-db-sync" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173736 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="009d1cab-2cab-475b-9722-881450cee4a5" containerName="dnsmasq-dns" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173757 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8aafab-3905-4e44-ba3e-134253a38a60" containerName="barbican-db-sync" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173789 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-central-agent" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173801 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="462f37c8-5909-418b-bf1f-58af764957ab" containerName="cinder-db-sync" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173817 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="ceilometer-notification-agent" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173829 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="sg-core" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.173839 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a510324b-16f3-4585-abc9-ae66997c2987" containerName="proxy-httpd" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.175740 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.178076 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.178424 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.192334 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.265729 4955 scope.go:117] "RemoveContainer" containerID="0fa0e8a211764f1a621af9c25b87737c29a0e60393abf27277a89be3b8a56953" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342178 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342584 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5q8\" (UniqueName: \"kubernetes.io/projected/06f5162a-d775-4658-b6a3-10e528720bcf-kube-api-access-2r5q8\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342613 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-run-httpd\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342670 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342708 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-config-data\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342781 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-scripts\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.342817 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-log-httpd\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.356608 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-664579f9ff-tk2xr"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.358047 4955 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.360941 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.361269 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-td7s9" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.361534 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.381237 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664579f9ff-tk2xr"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444157 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940320a-acf4-4bf3-88b4-00a1689be1c5-logs\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444220 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfk6\" (UniqueName: \"kubernetes.io/projected/b940320a-acf4-4bf3-88b4-00a1689be1c5-kube-api-access-hlfk6\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444251 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r5q8\" (UniqueName: \"kubernetes.io/projected/06f5162a-d775-4658-b6a3-10e528720bcf-kube-api-access-2r5q8\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444275 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-run-httpd\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444302 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-config-data-custom\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444335 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-config-data\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444360 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 
crc kubenswrapper[4955]: I0202 13:19:26.444391 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-config-data\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444463 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-scripts\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-log-httpd\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444596 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.444618 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-combined-ca-bundle\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.445413 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-run-httpd\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.455893 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-log-httpd\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.464272 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.471190 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.471970 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-scripts\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc 
kubenswrapper[4955]: I0202 13:19:26.477461 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-config-data\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.523086 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b4bfb45d6-658rw"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.524546 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.525608 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r5q8\" (UniqueName: \"kubernetes.io/projected/06f5162a-d775-4658-b6a3-10e528720bcf-kube-api-access-2r5q8\") pod \"ceilometer-0\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.529811 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.551686 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-combined-ca-bundle\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.551789 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940320a-acf4-4bf3-88b4-00a1689be1c5-logs\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.551814 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfk6\" (UniqueName: \"kubernetes.io/projected/b940320a-acf4-4bf3-88b4-00a1689be1c5-kube-api-access-hlfk6\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.551839 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-config-data-custom\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.551862 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-config-data\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.553515 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940320a-acf4-4bf3-88b4-00a1689be1c5-logs\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " 
pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.556161 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.565251 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b4bfb45d6-658rw"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.566072 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-combined-ca-bundle\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.573265 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-config-data-custom\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.574686 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b940320a-acf4-4bf3-88b4-00a1689be1c5-config-data\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.625301 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfk6\" (UniqueName: \"kubernetes.io/projected/b940320a-acf4-4bf3-88b4-00a1689be1c5-kube-api-access-hlfk6\") pod \"barbican-worker-664579f9ff-tk2xr\" (UID: \"b940320a-acf4-4bf3-88b4-00a1689be1c5\") " pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.662623 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-combined-ca-bundle\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.662683 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-config-data-custom\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.662734 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8304221-9734-4385-80ea-be1ad2824ac1-logs\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.662913 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdsf\" (UniqueName: 
\"kubernetes.io/projected/f8304221-9734-4385-80ea-be1ad2824ac1-kube-api-access-crdsf\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.662982 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-config-data\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.671485 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-rtfwj"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.681405 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.688072 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664579f9ff-tk2xr" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.728628 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-rtfwj"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.757631 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.760525 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.790735 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-55npv" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794169 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794243 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794339 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crdsf\" (UniqueName: \"kubernetes.io/projected/f8304221-9734-4385-80ea-be1ad2824ac1-kube-api-access-crdsf\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794364 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cd37a89-313f-4873-8ad6-a601101d75d8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794386 
4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-svc\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794439 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794487 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-config\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794551 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-config-data\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794605 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794634 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794753 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsntk\" (UniqueName: \"kubernetes.io/projected/6cd37a89-313f-4873-8ad6-a601101d75d8-kube-api-access-bsntk\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794799 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-scripts\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794822 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 
13:19:26.794872 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-combined-ca-bundle\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.794917 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-config-data-custom\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.795002 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8304221-9734-4385-80ea-be1ad2824ac1-logs\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.795038 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcsr\" (UniqueName: \"kubernetes.io/projected/e471c195-d048-4f29-82c9-7d310766926c-kube-api-access-jqcsr\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.799894 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.802601 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8304221-9734-4385-80ea-be1ad2824ac1-logs\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.803796 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.806755 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.806967 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.814942 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-combined-ca-bundle\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.816866 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-rtfwj"] Feb 02 13:19:26 crc kubenswrapper[4955]: E0202 13:19:26.817910 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-jqcsr ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/dnsmasq-dns-b895b5785-rtfwj" podUID="e471c195-d048-4f29-82c9-7d310766926c" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.825483 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-config-data\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.830319 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5556dd9bdb-vvgs6"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.832871 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.836598 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.836634 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8304221-9734-4385-80ea-be1ad2824ac1-config-data-custom\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.844519 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crdsf\" (UniqueName: \"kubernetes.io/projected/f8304221-9734-4385-80ea-be1ad2824ac1-kube-api-access-crdsf\") pod \"barbican-keystone-listener-6b4bfb45d6-658rw\" (UID: \"f8304221-9734-4385-80ea-be1ad2824ac1\") " pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.859141 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-trw5r"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.860816 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.869665 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5556dd9bdb-vvgs6"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.888587 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-trw5r"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.896347 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.896633 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.896765 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-config\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.896863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897014 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897127 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897225 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc57c\" (UniqueName: \"kubernetes.io/projected/6c56868b-3c42-436b-aa99-89edb4701754-kube-api-access-fc57c\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897319 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsntk\" (UniqueName: \"kubernetes.io/projected/6cd37a89-313f-4873-8ad6-a601101d75d8-kube-api-access-bsntk\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897392 
4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897476 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-scripts\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897567 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897680 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2nj\" (UniqueName: \"kubernetes.io/projected/dbd94f38-a41d-4069-976f-fb347698edd6-kube-api-access-7l2nj\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897761 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897882 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqcsr\" (UniqueName: \"kubernetes.io/projected/e471c195-d048-4f29-82c9-7d310766926c-kube-api-access-jqcsr\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.897938 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-config\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.898897 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.900216 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.901712 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-scripts\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " 
pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.902167 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.908169 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.910936 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data-custom\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911013 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911065 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911164 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-config\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911227 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-svc\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911247 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-combined-ca-bundle\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:26 crc 
kubenswrapper[4955]: I0202 13:19:26.911271 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cd37a89-313f-4873-8ad6-a601101d75d8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.911306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbd94f38-a41d-4069-976f-fb347698edd6-logs\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.912580 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.913568 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.913649 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cd37a89-313f-4873-8ad6-a601101d75d8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.914015 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-svc\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.915971 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.915991 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.917220 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsntk\" (UniqueName: \"kubernetes.io/projected/6cd37a89-313f-4873-8ad6-a601101d75d8-kube-api-access-bsntk\") pod \"cinder-scheduler-0\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.921503 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.924607 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.926038 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jqcsr\" (UniqueName: \"kubernetes.io/projected/e471c195-d048-4f29-82c9-7d310766926c-kube-api-access-jqcsr\") pod \"dnsmasq-dns-b895b5785-rtfwj\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:26 crc kubenswrapper[4955]: I0202 13:19:26.988811 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012534 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l2nj\" (UniqueName: \"kubernetes.io/projected/dbd94f38-a41d-4069-976f-fb347698edd6-kube-api-access-7l2nj\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012620 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012697 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012726 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa13bc5c-688d-4dfc-a1b6-da5214be9266-logs\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012747 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-scripts\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa13bc5c-688d-4dfc-a1b6-da5214be9266-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012798 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data-custom\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012836 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 
13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012890 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-config\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012924 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data-custom\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012955 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-combined-ca-bundle\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.012980 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbd94f38-a41d-4069-976f-fb347698edd6-logs\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.013040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.013066 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9r49\" (UniqueName: \"kubernetes.io/projected/aa13bc5c-688d-4dfc-a1b6-da5214be9266-kube-api-access-p9r49\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.013141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.013189 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.013215 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc57c\" (UniqueName: \"kubernetes.io/projected/6c56868b-3c42-436b-aa99-89edb4701754-kube-api-access-fc57c\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.013258 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.014424 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-config\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.014889 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.015620 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.018178 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbd94f38-a41d-4069-976f-fb347698edd6-logs\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.018931 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.018960 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.020616 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-combined-ca-bundle\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.027716 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.038482 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data-custom\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.051518 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc57c\" (UniqueName: \"kubernetes.io/projected/6c56868b-3c42-436b-aa99-89edb4701754-kube-api-access-fc57c\") pod \"dnsmasq-dns-5c9776ccc5-trw5r\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.053628 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l2nj\" (UniqueName: \"kubernetes.io/projected/dbd94f38-a41d-4069-976f-fb347698edd6-kube-api-access-7l2nj\") pod \"barbican-api-5556dd9bdb-vvgs6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.119891 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.119950 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa13bc5c-688d-4dfc-a1b6-da5214be9266-logs\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.120001 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-scripts\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.120032 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa13bc5c-688d-4dfc-a1b6-da5214be9266-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.120106 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa13bc5c-688d-4dfc-a1b6-da5214be9266-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.120313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data-custom\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.120460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9r49\" (UniqueName: \"kubernetes.io/projected/aa13bc5c-688d-4dfc-a1b6-da5214be9266-kube-api-access-p9r49\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.120766 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.121889 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa13bc5c-688d-4dfc-a1b6-da5214be9266-logs\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.126512 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.129951 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.137348 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data-custom\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.146994 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-scripts\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.151498 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.167876 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9r49\" (UniqueName: \"kubernetes.io/projected/aa13bc5c-688d-4dfc-a1b6-da5214be9266-kube-api-access-p9r49\") pod \"cinder-api-0\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.172408 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.190033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.222382 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.222742 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-swift-storage-0\") pod \"e471c195-d048-4f29-82c9-7d310766926c\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.222955 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqcsr\" (UniqueName: \"kubernetes.io/projected/e471c195-d048-4f29-82c9-7d310766926c-kube-api-access-jqcsr\") pod \"e471c195-d048-4f29-82c9-7d310766926c\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.223048 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-nb\") pod \"e471c195-d048-4f29-82c9-7d310766926c\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.223071 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-config\") pod \"e471c195-d048-4f29-82c9-7d310766926c\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.223104 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-svc\") pod \"e471c195-d048-4f29-82c9-7d310766926c\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.223135 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-sb\") pod \"e471c195-d048-4f29-82c9-7d310766926c\" (UID: \"e471c195-d048-4f29-82c9-7d310766926c\") " Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.224399 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e471c195-d048-4f29-82c9-7d310766926c" (UID: "e471c195-d048-4f29-82c9-7d310766926c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.225364 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e471c195-d048-4f29-82c9-7d310766926c" (UID: "e471c195-d048-4f29-82c9-7d310766926c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.227872 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e471c195-d048-4f29-82c9-7d310766926c" (UID: "e471c195-d048-4f29-82c9-7d310766926c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.234342 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e471c195-d048-4f29-82c9-7d310766926c" (UID: "e471c195-d048-4f29-82c9-7d310766926c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.234483 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e471c195-d048-4f29-82c9-7d310766926c-kube-api-access-jqcsr" (OuterVolumeSpecName: "kube-api-access-jqcsr") pod "e471c195-d048-4f29-82c9-7d310766926c" (UID: "e471c195-d048-4f29-82c9-7d310766926c"). InnerVolumeSpecName "kube-api-access-jqcsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.234639 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-config" (OuterVolumeSpecName: "config") pod "e471c195-d048-4f29-82c9-7d310766926c" (UID: "e471c195-d048-4f29-82c9-7d310766926c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.318494 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.335398 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.335431 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.335442 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.335453 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.335464 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e471c195-d048-4f29-82c9-7d310766926c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.335488 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqcsr\" (UniqueName: \"kubernetes.io/projected/e471c195-d048-4f29-82c9-7d310766926c-kube-api-access-jqcsr\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.379436 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:27 crc kubenswrapper[4955]: W0202 13:19:27.393922 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06f5162a_d775_4658_b6a3_10e528720bcf.slice/crio-c2b2f36ec693d26f0e87bc9e4e7356b2e071acb825520796b36ede82b66e0305 WatchSource:0}: Error finding container c2b2f36ec693d26f0e87bc9e4e7356b2e071acb825520796b36ede82b66e0305: Status 404 returned error can't find the container with id c2b2f36ec693d26f0e87bc9e4e7356b2e071acb825520796b36ede82b66e0305 Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.442177 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.493680 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664579f9ff-tk2xr"] Feb 02 13:19:27 crc kubenswrapper[4955]: W0202 13:19:27.508270 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb940320a_acf4_4bf3_88b4_00a1689be1c5.slice/crio-60fc6dd4c7bbe725384288932395a25d440e0de513a1863668fc6a02f91a5762 WatchSource:0}: Error finding container 60fc6dd4c7bbe725384288932395a25d440e0de513a1863668fc6a02f91a5762: Status 404 returned error can't find the container with id 60fc6dd4c7bbe725384288932395a25d440e0de513a1863668fc6a02f91a5762 Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.715090 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b4bfb45d6-658rw"] Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.733087 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a510324b-16f3-4585-abc9-ae66997c2987" path="/var/lib/kubelet/pods/a510324b-16f3-4585-abc9-ae66997c2987/volumes" Feb 02 13:19:27 crc kubenswrapper[4955]: I0202 13:19:27.798312 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:27 crc kubenswrapper[4955]: W0202 13:19:27.805660 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cd37a89_313f_4873_8ad6_a601101d75d8.slice/crio-9d037e9225ae894ae78d9d629d09927da66576a60415720dc022303b02126f42 WatchSource:0}: Error finding container 9d037e9225ae894ae78d9d629d09927da66576a60415720dc022303b02126f42: Status 404 returned error can't find the container with id 9d037e9225ae894ae78d9d629d09927da66576a60415720dc022303b02126f42 Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.011323 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5556dd9bdb-vvgs6"] Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.035977 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-trw5r"] Feb 02 13:19:28 crc kubenswrapper[4955]: W0202 13:19:28.066117 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbd94f38_a41d_4069_976f_fb347698edd6.slice/crio-dee0009f92f6c4015a2d7b481e9d713ae051de32b6b6e68874a77018cc5cced1 WatchSource:0}: Error finding container dee0009f92f6c4015a2d7b481e9d713ae051de32b6b6e68874a77018cc5cced1: Status 404 returned error can't find the container with id dee0009f92f6c4015a2d7b481e9d713ae051de32b6b6e68874a77018cc5cced1 Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.185063 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.247774 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"6cd37a89-313f-4873-8ad6-a601101d75d8","Type":"ContainerStarted","Data":"9d037e9225ae894ae78d9d629d09927da66576a60415720dc022303b02126f42"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.251730 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" event={"ID":"f8304221-9734-4385-80ea-be1ad2824ac1","Type":"ContainerStarted","Data":"361adc024dbbf2ca12a2cf7a2feec9107241198d13babc4b55b7545abbb299e2"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.253770 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664579f9ff-tk2xr" event={"ID":"b940320a-acf4-4bf3-88b4-00a1689be1c5","Type":"ContainerStarted","Data":"60fc6dd4c7bbe725384288932395a25d440e0de513a1863668fc6a02f91a5762"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.256973 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerStarted","Data":"43dc5f0a7f8f526a75705c4738723e20ae7abcebe886c620a7ad5f69ee7296a3"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.257018 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerStarted","Data":"c2b2f36ec693d26f0e87bc9e4e7356b2e071acb825520796b36ede82b66e0305"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.258391 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" event={"ID":"6c56868b-3c42-436b-aa99-89edb4701754","Type":"ContainerStarted","Data":"e72ff40a9ae50f0bf4c57e35f7e6369c0621f7c8c6c4a8edd5cee71a94c3d650"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.260124 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5556dd9bdb-vvgs6" event={"ID":"dbd94f38-a41d-4069-976f-fb347698edd6","Type":"ContainerStarted","Data":"dee0009f92f6c4015a2d7b481e9d713ae051de32b6b6e68874a77018cc5cced1"} Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.260178 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-rtfwj" Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.367985 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-rtfwj"] Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.373668 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-rtfwj"] Feb 02 13:19:28 crc kubenswrapper[4955]: I0202 13:19:28.740318 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.270720 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aa13bc5c-688d-4dfc-a1b6-da5214be9266","Type":"ContainerStarted","Data":"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f"} Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.271070 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aa13bc5c-688d-4dfc-a1b6-da5214be9266","Type":"ContainerStarted","Data":"154b9f0c9581dee3ecaade0ecf59b2db153aef3dce44a9dfa1f37f497dcec803"} Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.272660 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c56868b-3c42-436b-aa99-89edb4701754" containerID="ee558c3f627276eb2b293236ce8ecad441d2f566a713f4513332f41536ff8516" exitCode=0 Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.272727 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" event={"ID":"6c56868b-3c42-436b-aa99-89edb4701754","Type":"ContainerDied","Data":"ee558c3f627276eb2b293236ce8ecad441d2f566a713f4513332f41536ff8516"} Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.277612 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5556dd9bdb-vvgs6" event={"ID":"dbd94f38-a41d-4069-976f-fb347698edd6","Type":"ContainerStarted","Data":"3dd62fe38f541578e6715632c5c28cffee451192a6e233d2e840bf1af503efa0"} Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.277681 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5556dd9bdb-vvgs6" event={"ID":"dbd94f38-a41d-4069-976f-fb347698edd6","Type":"ContainerStarted","Data":"9720d1c591b1dec49c82c853629bc29ea23ffa86cde4c173ef4428da55747d38"} Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.277873 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.277998 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.319199 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podStartSLOduration=3.319153662 podStartE2EDuration="3.319153662s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:29.310015711 +0000 UTC m=+1020.222352161" watchObservedRunningTime="2026-02-02 13:19:29.319153662 +0000 UTC m=+1020.231490112" Feb 02 13:19:29 crc kubenswrapper[4955]: I0202 13:19:29.733492 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e471c195-d048-4f29-82c9-7d310766926c" path="/var/lib/kubelet/pods/e471c195-d048-4f29-82c9-7d310766926c/volumes" Feb 02 13:19:30 crc 
kubenswrapper[4955]: I0202 13:19:30.297483 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664579f9ff-tk2xr" event={"ID":"b940320a-acf4-4bf3-88b4-00a1689be1c5","Type":"ContainerStarted","Data":"68b09442905fd0e71262675b073111fe7cd3b5e0b3601296f0b2218cb0370431"} Feb 02 13:19:30 crc kubenswrapper[4955]: I0202 13:19:30.303249 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerStarted","Data":"40fa117856d5155f71e264fade9f24cd4016b98b2e636e823597d994d4468d7c"} Feb 02 13:19:30 crc kubenswrapper[4955]: I0202 13:19:30.305459 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" event={"ID":"6c56868b-3c42-436b-aa99-89edb4701754","Type":"ContainerStarted","Data":"e4b32a17a6f8095dc3b112a5c1b7a3419f71a9788828ead6b4227f526e647c32"} Feb 02 13:19:30 crc kubenswrapper[4955]: I0202 13:19:30.306718 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:30 crc kubenswrapper[4955]: I0202 13:19:30.309995 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" event={"ID":"f8304221-9734-4385-80ea-be1ad2824ac1","Type":"ContainerStarted","Data":"9928668653eb3e32481bc22f26ed4c1b5b30c110207ef41bc775243f20a53d36"} Feb 02 13:19:30 crc kubenswrapper[4955]: I0202 13:19:30.341825 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" podStartSLOduration=4.341806159 podStartE2EDuration="4.341806159s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:30.322404852 +0000 UTC m=+1021.234741322" watchObservedRunningTime="2026-02-02 13:19:30.341806159 +0000 UTC m=+1021.254142609" Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.322778 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6cd37a89-313f-4873-8ad6-a601101d75d8","Type":"ContainerStarted","Data":"2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d"} Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.323338 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6cd37a89-313f-4873-8ad6-a601101d75d8","Type":"ContainerStarted","Data":"0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa"} Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.326735 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" event={"ID":"f8304221-9734-4385-80ea-be1ad2824ac1","Type":"ContainerStarted","Data":"03687acc5a99160d42b1f34c57087716f52208cfd6e25f867ced3c133d88cec9"} Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.328429 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aa13bc5c-688d-4dfc-a1b6-da5214be9266","Type":"ContainerStarted","Data":"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9"} Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.328528 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api-log" 
containerID="cri-o://66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f" gracePeriod=30 Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.328581 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.328596 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api" containerID="cri-o://fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9" gracePeriod=30 Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.332699 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664579f9ff-tk2xr" event={"ID":"b940320a-acf4-4bf3-88b4-00a1689be1c5","Type":"ContainerStarted","Data":"2db72ceae9db89f09ca6f6a11467ded379d9919361103642512d042712a666d4"} Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.338085 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerStarted","Data":"b98cf6590b4e80b3df7715ddda4870ff3a8e18b54ca12d03c80affd54b2b09d6"} Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.353245 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.845584826 podStartE2EDuration="5.353230528s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="2026-02-02 13:19:27.808762723 +0000 UTC m=+1018.721099173" lastFinishedPulling="2026-02-02 13:19:29.316408425 +0000 UTC m=+1020.228744875" observedRunningTime="2026-02-02 13:19:31.352639553 +0000 UTC m=+1022.264976013" watchObservedRunningTime="2026-02-02 13:19:31.353230528 +0000 UTC m=+1022.265566978" Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.374479 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-664579f9ff-tk2xr" podStartSLOduration=3.223094745 podStartE2EDuration="5.374457139s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="2026-02-02 13:19:27.520854041 +0000 UTC m=+1018.433190491" lastFinishedPulling="2026-02-02 13:19:29.672216435 +0000 UTC m=+1020.584552885" observedRunningTime="2026-02-02 13:19:31.367682986 +0000 UTC m=+1022.280019436" watchObservedRunningTime="2026-02-02 13:19:31.374457139 +0000 UTC m=+1022.286793589" Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.389996 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.389975143 podStartE2EDuration="5.389975143s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:31.385116456 +0000 UTC m=+1022.297452906" watchObservedRunningTime="2026-02-02 13:19:31.389975143 +0000 UTC m=+1022.302311593" Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.418788 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b4bfb45d6-658rw" podStartSLOduration=3.450342705 podStartE2EDuration="5.418770108s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="2026-02-02 13:19:27.744188196 +0000 UTC m=+1018.656524646" lastFinishedPulling="2026-02-02 13:19:29.712615599 +0000 UTC m=+1020.624952049" observedRunningTime="2026-02-02 13:19:31.408215483 
+0000 UTC m=+1022.320551943" watchObservedRunningTime="2026-02-02 13:19:31.418770108 +0000 UTC m=+1022.331106558" Feb 02 13:19:31 crc kubenswrapper[4955]: I0202 13:19:31.978463 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059472 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-scripts\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059591 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa13bc5c-688d-4dfc-a1b6-da5214be9266-logs\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059660 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa13bc5c-688d-4dfc-a1b6-da5214be9266-etc-machine-id\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059772 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-combined-ca-bundle\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059798 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059816 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data-custom\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.059860 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9r49\" (UniqueName: \"kubernetes.io/projected/aa13bc5c-688d-4dfc-a1b6-da5214be9266-kube-api-access-p9r49\") pod \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\" (UID: \"aa13bc5c-688d-4dfc-a1b6-da5214be9266\") " Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.060274 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa13bc5c-688d-4dfc-a1b6-da5214be9266-logs" (OuterVolumeSpecName: "logs") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.060728 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa13bc5c-688d-4dfc-a1b6-da5214be9266-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.066671 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.067159 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-scripts" (OuterVolumeSpecName: "scripts") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.068802 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa13bc5c-688d-4dfc-a1b6-da5214be9266-kube-api-access-p9r49" (OuterVolumeSpecName: "kube-api-access-p9r49") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "kube-api-access-p9r49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.100156 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.134484 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data" (OuterVolumeSpecName: "config-data") pod "aa13bc5c-688d-4dfc-a1b6-da5214be9266" (UID: "aa13bc5c-688d-4dfc-a1b6-da5214be9266"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.162287 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9r49\" (UniqueName: \"kubernetes.io/projected/aa13bc5c-688d-4dfc-a1b6-da5214be9266-kube-api-access-p9r49\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.162624 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.162726 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa13bc5c-688d-4dfc-a1b6-da5214be9266-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.162851 4955 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa13bc5c-688d-4dfc-a1b6-da5214be9266-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.162958 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.163051 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.163152 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa13bc5c-688d-4dfc-a1b6-da5214be9266-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.190514 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.349378 4955 generic.go:334] "Generic (PLEG): container finished" podID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerID="fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9" exitCode=0 Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.350525 4955 generic.go:334] "Generic (PLEG): container finished" podID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerID="66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f" exitCode=143 Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.352394 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.352529 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aa13bc5c-688d-4dfc-a1b6-da5214be9266","Type":"ContainerDied","Data":"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9"} Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.352596 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aa13bc5c-688d-4dfc-a1b6-da5214be9266","Type":"ContainerDied","Data":"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f"} Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.352617 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aa13bc5c-688d-4dfc-a1b6-da5214be9266","Type":"ContainerDied","Data":"154b9f0c9581dee3ecaade0ecf59b2db153aef3dce44a9dfa1f37f497dcec803"} Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.352639 4955 scope.go:117] "RemoveContainer" containerID="fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.378539 4955 scope.go:117] "RemoveContainer" containerID="66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.403830 4955 scope.go:117] "RemoveContainer" containerID="fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9" Feb 02 13:19:32 crc kubenswrapper[4955]: E0202 13:19:32.404798 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9\": container with ID starting with fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9 not found: ID does not exist" containerID="fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.404915 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9"} err="failed to get container status \"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9\": rpc error: code = NotFound desc = could not find container \"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9\": container with ID starting with fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9 not found: ID does not exist" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.405006 4955 scope.go:117] "RemoveContainer" containerID="66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f" Feb 02 13:19:32 crc kubenswrapper[4955]: E0202 13:19:32.405271 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f\": container with ID starting with 66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f not found: ID does not exist" containerID="66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.405302 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f"} err="failed to get container status \"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f\": rpc error: code = NotFound desc = 
could not find container \"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f\": container with ID starting with 66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f not found: ID does not exist" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.405322 4955 scope.go:117] "RemoveContainer" containerID="fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.408393 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9"} err="failed to get container status \"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9\": rpc error: code = NotFound desc = could not find container \"fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9\": container with ID starting with fa859d703dcab91225c9b12ac96e31e0fafeec0913a7294fa530276ec370d9f9 not found: ID does not exist" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.408767 4955 scope.go:117] "RemoveContainer" containerID="66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.409404 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f"} err="failed to get container status \"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f\": rpc error: code = NotFound desc = could not find container \"66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f\": container with ID starting with 66c1b4bcced65fc2872a2aa57fc7c9d1e6cde4acd509de9a74a06ae9ab41882f not found: ID does not exist" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.411861 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.421780 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.439034 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:32 crc kubenswrapper[4955]: E0202 13:19:32.439908 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.439938 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api" Feb 02 13:19:32 crc kubenswrapper[4955]: E0202 13:19:32.439957 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api-log" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.439966 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api-log" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.440302 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api-log" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.440326 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" containerName="cinder-api" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.441684 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.446065 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.446128 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.450010 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.469613 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582034 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582218 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582290 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-scripts\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582333 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582493 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-config-data-custom\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582736 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-config-data\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582765 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8qt\" (UniqueName: \"kubernetes.io/projected/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-kube-api-access-4p8qt\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.582894 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-logs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.583009 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684687 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684730 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684766 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-scripts\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684782 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684878 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-config-data-custom\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-config-data\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684933 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8qt\" (UniqueName: \"kubernetes.io/projected/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-kube-api-access-4p8qt\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684975 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-logs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.685022 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.684873 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.685780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-logs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.696007 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-scripts\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.696249 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.704606 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.705404 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-config-data-custom\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.710087 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.710572 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8qt\" (UniqueName: \"kubernetes.io/projected/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-kube-api-access-4p8qt\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.710861 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d89f97-f871-47b8-ac3f-e0f2e4a8242a-config-data\") pod \"cinder-api-0\" (UID: \"69d89f97-f871-47b8-ac3f-e0f2e4a8242a\") " pod="openstack/cinder-api-0" Feb 02 13:19:32 crc kubenswrapper[4955]: I0202 13:19:32.766935 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.261807 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.342295 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78f69789f4-g5k2m"] Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.343973 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.352789 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.353135 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.363055 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f69789f4-g5k2m"] Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.392925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69d89f97-f871-47b8-ac3f-e0f2e4a8242a","Type":"ContainerStarted","Data":"d4182c6484e72b6ca9b7b4002f766d22fef7b899cb63b3731a7ef50fc02576c1"} Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407173 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn558\" (UniqueName: \"kubernetes.io/projected/337dae88-5440-410a-8af7-1edfd336449f-kube-api-access-qn558\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407219 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-internal-tls-certs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407283 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337dae88-5440-410a-8af7-1edfd336449f-logs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407337 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-combined-ca-bundle\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407360 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-public-tls-certs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407399 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-config-data-custom\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.407419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-config-data\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509513 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337dae88-5440-410a-8af7-1edfd336449f-logs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509636 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-combined-ca-bundle\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-public-tls-certs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509679 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-config-data-custom\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509698 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-config-data\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509777 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn558\" (UniqueName: \"kubernetes.io/projected/337dae88-5440-410a-8af7-1edfd336449f-kube-api-access-qn558\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.509804 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-internal-tls-certs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.510844 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/337dae88-5440-410a-8af7-1edfd336449f-logs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.517612 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-public-tls-certs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.521052 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-internal-tls-certs\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.525339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-combined-ca-bundle\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.536229 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn558\" (UniqueName: \"kubernetes.io/projected/337dae88-5440-410a-8af7-1edfd336449f-kube-api-access-qn558\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.546866 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-config-data\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.559401 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/337dae88-5440-410a-8af7-1edfd336449f-config-data-custom\") pod \"barbican-api-78f69789f4-g5k2m\" (UID: \"337dae88-5440-410a-8af7-1edfd336449f\") " pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.718822 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:33 crc kubenswrapper[4955]: I0202 13:19:33.728301 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa13bc5c-688d-4dfc-a1b6-da5214be9266" path="/var/lib/kubelet/pods/aa13bc5c-688d-4dfc-a1b6-da5214be9266/volumes" Feb 02 13:19:34 crc kubenswrapper[4955]: I0202 13:19:34.301108 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f69789f4-g5k2m"] Feb 02 13:19:34 crc kubenswrapper[4955]: I0202 13:19:34.414661 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f69789f4-g5k2m" event={"ID":"337dae88-5440-410a-8af7-1edfd336449f","Type":"ContainerStarted","Data":"d92aaa2f621d700ed5cdc3f74985179d97753cf0616480f461b3422a55291724"} Feb 02 13:19:34 crc kubenswrapper[4955]: I0202 13:19:34.418831 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerStarted","Data":"ba955a1496658639e275b8b6d9b51ed88e6f0e8864fc5de8675f0fd6180d98eb"} Feb 02 13:19:34 crc kubenswrapper[4955]: I0202 13:19:34.420106 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:19:34 crc kubenswrapper[4955]: I0202 13:19:34.422360 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69d89f97-f871-47b8-ac3f-e0f2e4a8242a","Type":"ContainerStarted","Data":"8a312a1877f109f7cbb8bc869b1ba843bfc270030e851b6c6d4fcaf83a4a902d"} Feb 02 13:19:34 crc kubenswrapper[4955]: I0202 13:19:34.458478 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.248144897 podStartE2EDuration="8.458460201s" podCreationTimestamp="2026-02-02 13:19:26 +0000 UTC" firstStartedPulling="2026-02-02 13:19:27.408051261 +0000 UTC m=+1018.320387711" lastFinishedPulling="2026-02-02 13:19:33.618366565 +0000 UTC m=+1024.530703015" observedRunningTime="2026-02-02 13:19:34.452238481 +0000 UTC m=+1025.364574951" watchObservedRunningTime="2026-02-02 13:19:34.458460201 +0000 UTC m=+1025.370796651" Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.279040 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.432404 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f69789f4-g5k2m" event={"ID":"337dae88-5440-410a-8af7-1edfd336449f","Type":"ContainerStarted","Data":"01b88858786ccb735ac1a43115743588119e806643b9d3222617d2f9afe1b8bf"} Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.432450 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f69789f4-g5k2m" event={"ID":"337dae88-5440-410a-8af7-1edfd336449f","Type":"ContainerStarted","Data":"2812041bc62ba0fa533ad0edcdf7032adceba3ef182ee381464bf4b9b05ca7ca"} Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.432738 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.432879 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.434569 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69d89f97-f871-47b8-ac3f-e0f2e4a8242a","Type":"ContainerStarted","Data":"dbb611765197058b2b6876ba680e1d5cb415d899496cb1d23f29f1827aa0cda2"} Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.434758 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.453819 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78f69789f4-g5k2m" podStartSLOduration=2.45380219 podStartE2EDuration="2.45380219s" podCreationTimestamp="2026-02-02 13:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:35.450009069 +0000 UTC m=+1026.362345519" watchObservedRunningTime="2026-02-02 13:19:35.45380219 +0000 UTC m=+1026.366138640" Feb 02 13:19:35 crc kubenswrapper[4955]: I0202 13:19:35.483942 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.4839184469999998 podStartE2EDuration="3.483918447s" podCreationTimestamp="2026-02-02 13:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:35.471783084 +0000 UTC m=+1026.384119564" watchObservedRunningTime="2026-02-02 13:19:35.483918447 +0000 UTC m=+1026.396254907" Feb 02 13:19:37 crc kubenswrapper[4955]: I0202 13:19:37.321630 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:19:37 crc kubenswrapper[4955]: I0202 13:19:37.419307 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fbrpk"] Feb 02 13:19:37 crc kubenswrapper[4955]: I0202 13:19:37.419540 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerName="dnsmasq-dns" containerID="cri-o://35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91" gracePeriod=10 Feb 02 13:19:37 crc kubenswrapper[4955]: I0202 13:19:37.505058 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 13:19:37 crc kubenswrapper[4955]: I0202 13:19:37.592247 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:37 crc kubenswrapper[4955]: I0202 13:19:37.694898 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.157998 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.221147 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg9l6\" (UniqueName: \"kubernetes.io/projected/3480584e-cd7c-4621-a6b0-a64b6a6611ce-kube-api-access-tg9l6\") pod \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.221257 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-nb\") pod \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.221291 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-sb\") pod \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.221324 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-svc\") pod \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.221410 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-config\") pod \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.221437 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-swift-storage-0\") pod \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\" (UID: \"3480584e-cd7c-4621-a6b0-a64b6a6611ce\") " Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.231952 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3480584e-cd7c-4621-a6b0-a64b6a6611ce-kube-api-access-tg9l6" (OuterVolumeSpecName: "kube-api-access-tg9l6") pod "3480584e-cd7c-4621-a6b0-a64b6a6611ce" (UID: "3480584e-cd7c-4621-a6b0-a64b6a6611ce"). InnerVolumeSpecName "kube-api-access-tg9l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.325740 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg9l6\" (UniqueName: \"kubernetes.io/projected/3480584e-cd7c-4621-a6b0-a64b6a6611ce-kube-api-access-tg9l6\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.343310 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3480584e-cd7c-4621-a6b0-a64b6a6611ce" (UID: "3480584e-cd7c-4621-a6b0-a64b6a6611ce"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.359347 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3480584e-cd7c-4621-a6b0-a64b6a6611ce" (UID: "3480584e-cd7c-4621-a6b0-a64b6a6611ce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.364532 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-config" (OuterVolumeSpecName: "config") pod "3480584e-cd7c-4621-a6b0-a64b6a6611ce" (UID: "3480584e-cd7c-4621-a6b0-a64b6a6611ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.392656 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3480584e-cd7c-4621-a6b0-a64b6a6611ce" (UID: "3480584e-cd7c-4621-a6b0-a64b6a6611ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.404022 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3480584e-cd7c-4621-a6b0-a64b6a6611ce" (UID: "3480584e-cd7c-4621-a6b0-a64b6a6611ce"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.429986 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.430025 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.430044 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.430054 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.430065 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3480584e-cd7c-4621-a6b0-a64b6a6611ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488610 4955 generic.go:334] "Generic (PLEG): container finished" podID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerID="35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91" exitCode=0 Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488672 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" 
event={"ID":"3480584e-cd7c-4621-a6b0-a64b6a6611ce","Type":"ContainerDied","Data":"35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91"} Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488726 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" event={"ID":"3480584e-cd7c-4621-a6b0-a64b6a6611ce","Type":"ContainerDied","Data":"24269510a64585fa60103a313b0f1bc3eece2433bff6d3445149cdf4e4a545cc"} Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488746 4955 scope.go:117] "RemoveContainer" containerID="35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488741 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fbrpk" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488850 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="cinder-scheduler" containerID="cri-o://0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa" gracePeriod=30 Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.488950 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="probe" containerID="cri-o://2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d" gracePeriod=30 Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.535579 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fbrpk"] Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.537520 4955 scope.go:117] "RemoveContainer" containerID="916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.547704 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fbrpk"] Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.607710 4955 scope.go:117] "RemoveContainer" containerID="35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91" Feb 02 13:19:38 crc kubenswrapper[4955]: E0202 13:19:38.608312 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91\": container with ID starting with 35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91 not found: ID does not exist" containerID="35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.608340 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91"} err="failed to get container status \"35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91\": rpc error: code = NotFound desc = could not find container \"35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91\": container with ID starting with 35200518d3cd3e3207558386602b3bb730d62fc0f0aa0bb0acaffe36a98cbb91 not found: ID does not exist" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.608359 4955 scope.go:117] "RemoveContainer" containerID="916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08" Feb 02 13:19:38 crc kubenswrapper[4955]: E0202 13:19:38.610104 4955 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08\": container with ID starting with 916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08 not found: ID does not exist" containerID="916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.610186 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08"} err="failed to get container status \"916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08\": rpc error: code = NotFound desc = could not find container \"916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08\": container with ID starting with 916530111b3eea531111d0824f384606903328cdf1984016ffb9351b449d0b08 not found: ID does not exist" Feb 02 13:19:38 crc kubenswrapper[4955]: I0202 13:19:38.998392 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.024937 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.360152 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.362003 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cd4978596-vxlhp" Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.480541 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66b697b848-h98fb"] Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.504780 4955 generic.go:334] "Generic (PLEG): container finished" podID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerID="2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d" exitCode=0 Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.504844 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6cd37a89-313f-4873-8ad6-a601101d75d8","Type":"ContainerDied","Data":"2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d"} Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.627424 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79d6d6d4df-7xvpd" Feb 02 13:19:39 crc kubenswrapper[4955]: I0202 13:19:39.786932 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" path="/var/lib/kubelet/pods/3480584e-cd7c-4621-a6b0-a64b6a6611ce/volumes" Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.021620 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.061055 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74cbd57d57-fclbt" Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.141722 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-584b87959d-4rkvv"] Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.142057 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-584b87959d-4rkvv" 
podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-api" containerID="cri-o://6fb377675d21b30988b781602e0ff194c6d3a5ec8482f3ed3372259fb696f250" gracePeriod=30 Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.142708 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-584b87959d-4rkvv" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-httpd" containerID="cri-o://5c218e0830897ee24d6a272a3af19e0b501794b096ae5959823a6705204402d8" gracePeriod=30 Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.539801 4955 generic.go:334] "Generic (PLEG): container finished" podID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerID="5c218e0830897ee24d6a272a3af19e0b501794b096ae5959823a6705204402d8" exitCode=0 Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.540182 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b87959d-4rkvv" event={"ID":"1dee501b-e122-4870-b3bb-4096d3dcc975","Type":"ContainerDied","Data":"5c218e0830897ee24d6a272a3af19e0b501794b096ae5959823a6705204402d8"} Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.540254 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-66b697b848-h98fb" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-log" containerID="cri-o://b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca" gracePeriod=30 Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.540346 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-66b697b848-h98fb" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-api" containerID="cri-o://c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44" gracePeriod=30 Feb 02 13:19:40 crc kubenswrapper[4955]: I0202 13:19:40.561515 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.222707 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:19:41 crc kubenswrapper[4955]: E0202 13:19:41.223075 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerName="init" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.223091 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerName="init" Feb 02 13:19:41 crc kubenswrapper[4955]: E0202 13:19:41.223108 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerName="dnsmasq-dns" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.223116 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerName="dnsmasq-dns" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.223303 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="3480584e-cd7c-4621-a6b0-a64b6a6611ce" containerName="dnsmasq-dns" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.223860 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.226014 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xv4nz" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.226647 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.226834 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.245650 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.348805 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.348993 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-combined-ca-bundle\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.349051 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dxw\" (UniqueName: \"kubernetes.io/projected/19fa3e77-422a-425a-8e53-cefd5d880462-kube-api-access-d6dxw\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.349094 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config-secret\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.450445 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.450670 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-combined-ca-bundle\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.450714 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6dxw\" (UniqueName: \"kubernetes.io/projected/19fa3e77-422a-425a-8e53-cefd5d880462-kube-api-access-d6dxw\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.450742 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config-secret\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.451528 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.456708 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-combined-ca-bundle\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.459141 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config-secret\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.476240 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6dxw\" (UniqueName: \"kubernetes.io/projected/19fa3e77-422a-425a-8e53-cefd5d880462-kube-api-access-d6dxw\") pod \"openstackclient\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.541263 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.563623 4955 generic.go:334] "Generic (PLEG): container finished" podID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerID="b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca" exitCode=143 Feb 02 13:19:41 crc kubenswrapper[4955]: I0202 13:19:41.563688 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66b697b848-h98fb" event={"ID":"88662989-310c-4dc2-9f6e-26c35fcf8da3","Type":"ContainerDied","Data":"b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca"} Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.150756 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:19:42 crc kubenswrapper[4955]: W0202 13:19:42.163754 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fa3e77_422a_425a_8e53_cefd5d880462.slice/crio-f777e7b3a38cc2e1fa4a8dcecabb4eb835c67455a88ea9a1c27fdc3085ab0523 WatchSource:0}: Error finding container f777e7b3a38cc2e1fa4a8dcecabb4eb835c67455a88ea9a1c27fdc3085ab0523: Status 404 returned error can't find the container with id f777e7b3a38cc2e1fa4a8dcecabb4eb835c67455a88ea9a1c27fdc3085ab0523 Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.280269 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.470002 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data-custom\") pod \"6cd37a89-313f-4873-8ad6-a601101d75d8\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471094 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data\") pod \"6cd37a89-313f-4873-8ad6-a601101d75d8\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471160 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-scripts\") pod \"6cd37a89-313f-4873-8ad6-a601101d75d8\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471244 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsntk\" (UniqueName: \"kubernetes.io/projected/6cd37a89-313f-4873-8ad6-a601101d75d8-kube-api-access-bsntk\") pod \"6cd37a89-313f-4873-8ad6-a601101d75d8\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-combined-ca-bundle\") pod \"6cd37a89-313f-4873-8ad6-a601101d75d8\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471314 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cd37a89-313f-4873-8ad6-a601101d75d8-etc-machine-id\") pod \"6cd37a89-313f-4873-8ad6-a601101d75d8\" (UID: \"6cd37a89-313f-4873-8ad6-a601101d75d8\") " Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471453 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd37a89-313f-4873-8ad6-a601101d75d8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6cd37a89-313f-4873-8ad6-a601101d75d8" (UID: "6cd37a89-313f-4873-8ad6-a601101d75d8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.471846 4955 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cd37a89-313f-4873-8ad6-a601101d75d8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.477910 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6cd37a89-313f-4873-8ad6-a601101d75d8" (UID: "6cd37a89-313f-4873-8ad6-a601101d75d8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.479078 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-scripts" (OuterVolumeSpecName: "scripts") pod "6cd37a89-313f-4873-8ad6-a601101d75d8" (UID: "6cd37a89-313f-4873-8ad6-a601101d75d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.479122 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd37a89-313f-4873-8ad6-a601101d75d8-kube-api-access-bsntk" (OuterVolumeSpecName: "kube-api-access-bsntk") pod "6cd37a89-313f-4873-8ad6-a601101d75d8" (UID: "6cd37a89-313f-4873-8ad6-a601101d75d8"). InnerVolumeSpecName "kube-api-access-bsntk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.536645 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cd37a89-313f-4873-8ad6-a601101d75d8" (UID: "6cd37a89-313f-4873-8ad6-a601101d75d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.574870 4955 generic.go:334] "Generic (PLEG): container finished" podID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerID="0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa" exitCode=0 Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.574984 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.575027 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6cd37a89-313f-4873-8ad6-a601101d75d8","Type":"ContainerDied","Data":"0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa"} Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.575068 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6cd37a89-313f-4873-8ad6-a601101d75d8","Type":"ContainerDied","Data":"9d037e9225ae894ae78d9d629d09927da66576a60415720dc022303b02126f42"} Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.575092 4955 scope.go:117] "RemoveContainer" containerID="2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.577265 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.577641 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsntk\" (UniqueName: \"kubernetes.io/projected/6cd37a89-313f-4873-8ad6-a601101d75d8-kube-api-access-bsntk\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.577659 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.577672 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.578291 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data" (OuterVolumeSpecName: "config-data") pod "6cd37a89-313f-4873-8ad6-a601101d75d8" (UID: "6cd37a89-313f-4873-8ad6-a601101d75d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.585694 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"19fa3e77-422a-425a-8e53-cefd5d880462","Type":"ContainerStarted","Data":"f777e7b3a38cc2e1fa4a8dcecabb4eb835c67455a88ea9a1c27fdc3085ab0523"} Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.647892 4955 scope.go:117] "RemoveContainer" containerID="0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.668289 4955 scope.go:117] "RemoveContainer" containerID="2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d" Feb 02 13:19:42 crc kubenswrapper[4955]: E0202 13:19:42.668818 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d\": container with ID starting with 2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d not found: ID does not exist" containerID="2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.668877 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d"} err="failed to get container status \"2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d\": rpc error: code = NotFound desc = could not find container \"2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d\": container with ID starting with 2fb864a4da64ccb166c922abd4baf178ccfbd3f2e554447dfb2893b7ca28a19d not found: ID does not exist" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.668911 4955 scope.go:117] "RemoveContainer" containerID="0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa" Feb 02 13:19:42 crc kubenswrapper[4955]: E0202 13:19:42.669452 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa\": container with ID starting with 0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa not found: ID does not exist" containerID="0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.669530 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa"} err="failed to get container status \"0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa\": rpc error: code = NotFound desc = could not find container \"0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa\": container with ID starting with 0d08235f8fd3af8c5c63681f9af382bb45c4dc806fded7b6b0b43e422bcef8aa not found: ID does not exist" Feb 02 13:19:42 crc kubenswrapper[4955]: 
I0202 13:19:42.680361 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd37a89-313f-4873-8ad6-a601101d75d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.944707 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.971348 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.983269 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:42 crc kubenswrapper[4955]: E0202 13:19:42.984041 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="cinder-scheduler" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.984075 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="cinder-scheduler" Feb 02 13:19:42 crc kubenswrapper[4955]: E0202 13:19:42.984132 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="probe" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.984145 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="probe" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.984420 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="cinder-scheduler" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.984469 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" containerName="probe" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.985997 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.992393 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 13:19:42 crc kubenswrapper[4955]: I0202 13:19:42.992401 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.095987 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-config-data\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.096359 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.096472 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.096562 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12de4219-046b-4e44-bfa5-ec028fad8812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.096674 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-scripts\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.096755 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnz2\" (UniqueName: \"kubernetes.io/projected/12de4219-046b-4e44-bfa5-ec028fad8812-kube-api-access-xsnz2\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.198904 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-config-data\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.198968 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.199023 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.199050 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12de4219-046b-4e44-bfa5-ec028fad8812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.199097 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-scripts\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.199114 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsnz2\" (UniqueName: \"kubernetes.io/projected/12de4219-046b-4e44-bfa5-ec028fad8812-kube-api-access-xsnz2\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.199451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12de4219-046b-4e44-bfa5-ec028fad8812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.209678 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.217104 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsnz2\" (UniqueName: \"kubernetes.io/projected/12de4219-046b-4e44-bfa5-ec028fad8812-kube-api-access-xsnz2\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.230154 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.261201 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-config-data\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.273991 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12de4219-046b-4e44-bfa5-ec028fad8812-scripts\") pod \"cinder-scheduler-0\" (UID: \"12de4219-046b-4e44-bfa5-ec028fad8812\") " pod="openstack/cinder-scheduler-0" Feb 02 
13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.315408 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.729088 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd37a89-313f-4873-8ad6-a601101d75d8" path="/var/lib/kubelet/pods/6cd37a89-313f-4873-8ad6-a601101d75d8/volumes" Feb 02 13:19:43 crc kubenswrapper[4955]: I0202 13:19:43.839234 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.193484 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.328318 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-public-tls-certs\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.333451 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md7bq\" (UniqueName: \"kubernetes.io/projected/88662989-310c-4dc2-9f6e-26c35fcf8da3-kube-api-access-md7bq\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.333493 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-config-data\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.333583 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88662989-310c-4dc2-9f6e-26c35fcf8da3-logs\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.333624 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-internal-tls-certs\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.333796 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-combined-ca-bundle\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.334125 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-scripts\") pod \"88662989-310c-4dc2-9f6e-26c35fcf8da3\" (UID: \"88662989-310c-4dc2-9f6e-26c35fcf8da3\") " Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.336121 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88662989-310c-4dc2-9f6e-26c35fcf8da3-logs" (OuterVolumeSpecName: "logs") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.353920 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-scripts" (OuterVolumeSpecName: "scripts") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.357000 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88662989-310c-4dc2-9f6e-26c35fcf8da3-kube-api-access-md7bq" (OuterVolumeSpecName: "kube-api-access-md7bq") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). InnerVolumeSpecName "kube-api-access-md7bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.436836 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.436871 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md7bq\" (UniqueName: \"kubernetes.io/projected/88662989-310c-4dc2-9f6e-26c35fcf8da3-kube-api-access-md7bq\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.436885 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88662989-310c-4dc2-9f6e-26c35fcf8da3-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.477623 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-config-data" (OuterVolumeSpecName: "config-data") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.481238 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.512744 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.540647 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.540679 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.540687 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.559291 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "88662989-310c-4dc2-9f6e-26c35fcf8da3" (UID: "88662989-310c-4dc2-9f6e-26c35fcf8da3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.635110 4955 generic.go:334] "Generic (PLEG): container finished" podID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerID="c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44" exitCode=0 Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.635170 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66b697b848-h98fb" event={"ID":"88662989-310c-4dc2-9f6e-26c35fcf8da3","Type":"ContainerDied","Data":"c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44"} Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.635197 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66b697b848-h98fb" event={"ID":"88662989-310c-4dc2-9f6e-26c35fcf8da3","Type":"ContainerDied","Data":"5f241f5b97653e84023079556f0819c56e085fa56bc0a805447a8db40d50e3b0"} Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.635622 4955 scope.go:117] "RemoveContainer" containerID="c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.636015 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66b697b848-h98fb" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.643841 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88662989-310c-4dc2-9f6e-26c35fcf8da3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.646601 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"12de4219-046b-4e44-bfa5-ec028fad8812","Type":"ContainerStarted","Data":"c6caa9acedf674d88582bf400a0d3ee41f645407089f18fb6b27e6a25c97d7bf"} Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.690093 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66b697b848-h98fb"] Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.698949 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-66b697b848-h98fb"] Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.704789 4955 scope.go:117] "RemoveContainer" containerID="b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.779044 4955 scope.go:117] "RemoveContainer" containerID="c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44" Feb 02 13:19:44 crc kubenswrapper[4955]: E0202 13:19:44.785629 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44\": container with ID starting with c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44 not found: ID does not exist" containerID="c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.785771 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44"} err="failed to get container status \"c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44\": rpc error: code = NotFound desc = could not find container \"c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44\": container with ID starting with c3613eca5a9f9c282a5cf5acd5a38d90260113c20ac5d7f04d136af842bbba44 not found: ID does not exist" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.785884 4955 scope.go:117] "RemoveContainer" containerID="b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca" Feb 02 13:19:44 crc kubenswrapper[4955]: E0202 13:19:44.786752 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca\": container with ID starting with b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca not found: ID does not exist" containerID="b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca" Feb 02 13:19:44 crc kubenswrapper[4955]: I0202 13:19:44.787869 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca"} err="failed to get container status \"b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca\": rpc error: code = NotFound desc = could not find container \"b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca\": container with ID starting with 
b2f9615f5d8cd15a679a259d911b5d2767d61c7dbbb527b3b4eb67a3fd46f3ca not found: ID does not exist" Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.102507 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.468025 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.631323 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78f69789f4-g5k2m" Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.681185 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"12de4219-046b-4e44-bfa5-ec028fad8812","Type":"ContainerStarted","Data":"780857a82fbbf4a96de01c65f78187543bdd4158e8bd60434203e2442c328b93"} Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.715137 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5556dd9bdb-vvgs6"] Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.715415 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" containerID="cri-o://9720d1c591b1dec49c82c853629bc29ea23ffa86cde4c173ef4428da55747d38" gracePeriod=30 Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.715594 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api" containerID="cri-o://3dd62fe38f541578e6715632c5c28cffee451192a6e233d2e840bf1af503efa0" gracePeriod=30 Feb 02 13:19:45 crc kubenswrapper[4955]: I0202 13:19:45.758012 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" path="/var/lib/kubelet/pods/88662989-310c-4dc2-9f6e-26c35fcf8da3/volumes" Feb 02 13:19:46 crc kubenswrapper[4955]: I0202 13:19:46.692980 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"12de4219-046b-4e44-bfa5-ec028fad8812","Type":"ContainerStarted","Data":"b38b4d7ebe5cfc7b8dc703ffbdf3527dee0b8c87745ee6fa63bd06fbda3cf021"} Feb 02 13:19:46 crc kubenswrapper[4955]: I0202 13:19:46.697183 4955 generic.go:334] "Generic (PLEG): container finished" podID="dbd94f38-a41d-4069-976f-fb347698edd6" containerID="9720d1c591b1dec49c82c853629bc29ea23ffa86cde4c173ef4428da55747d38" exitCode=143 Feb 02 13:19:46 crc kubenswrapper[4955]: I0202 13:19:46.697246 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5556dd9bdb-vvgs6" event={"ID":"dbd94f38-a41d-4069-976f-fb347698edd6","Type":"ContainerDied","Data":"9720d1c591b1dec49c82c853629bc29ea23ffa86cde4c173ef4428da55747d38"} Feb 02 13:19:46 crc kubenswrapper[4955]: I0202 13:19:46.721935 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.721917529 podStartE2EDuration="4.721917529s" podCreationTimestamp="2026-02-02 13:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:46.713571037 +0000 UTC m=+1037.625907487" watchObservedRunningTime="2026-02-02 13:19:46.721917529 +0000 UTC m=+1037.634253979" Feb 02 13:19:48 crc kubenswrapper[4955]: 
I0202 13:19:48.315975 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.637167 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-79fb55657c-85sjk"] Feb 02 13:19:48 crc kubenswrapper[4955]: E0202 13:19:48.638372 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-api" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.638414 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-api" Feb 02 13:19:48 crc kubenswrapper[4955]: E0202 13:19:48.638459 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-log" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.638469 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-log" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.638745 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-log" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.638769 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="88662989-310c-4dc2-9f6e-26c35fcf8da3" containerName="placement-api" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.640175 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.642689 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.642861 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.644820 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.648210 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-79fb55657c-85sjk"] Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724632 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91116c53-5321-4170-9ec0-1c0588b81355-etc-swift\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724680 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-combined-ca-bundle\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724751 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-internal-tls-certs\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " 
pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724772 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqd6\" (UniqueName: \"kubernetes.io/projected/91116c53-5321-4170-9ec0-1c0588b81355-kube-api-access-lpqd6\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724838 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-public-tls-certs\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724887 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-config-data\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724937 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91116c53-5321-4170-9ec0-1c0588b81355-log-httpd\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.724988 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91116c53-5321-4170-9ec0-1c0588b81355-run-httpd\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.826950 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-internal-tls-certs\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.826998 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqd6\" (UniqueName: \"kubernetes.io/projected/91116c53-5321-4170-9ec0-1c0588b81355-kube-api-access-lpqd6\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.827081 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-public-tls-certs\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.827133 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-config-data\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: 
\"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.827191 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91116c53-5321-4170-9ec0-1c0588b81355-log-httpd\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.827260 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91116c53-5321-4170-9ec0-1c0588b81355-run-httpd\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.827291 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91116c53-5321-4170-9ec0-1c0588b81355-etc-swift\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.827324 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-combined-ca-bundle\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.828204 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91116c53-5321-4170-9ec0-1c0588b81355-log-httpd\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.828425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91116c53-5321-4170-9ec0-1c0588b81355-run-httpd\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.837600 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91116c53-5321-4170-9ec0-1c0588b81355-etc-swift\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.839419 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-config-data\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.841417 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-internal-tls-certs\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.851433 
4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-combined-ca-bundle\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.858191 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqd6\" (UniqueName: \"kubernetes.io/projected/91116c53-5321-4170-9ec0-1c0588b81355-kube-api-access-lpqd6\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.863380 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91116c53-5321-4170-9ec0-1c0588b81355-public-tls-certs\") pod \"swift-proxy-79fb55657c-85sjk\" (UID: \"91116c53-5321-4170-9ec0-1c0588b81355\") " pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.895206 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:55576->10.217.0.164:9311: read: connection reset by peer" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.895229 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:55588->10.217.0.164:9311: read: connection reset by peer" Feb 02 13:19:48 crc kubenswrapper[4955]: I0202 13:19:48.971132 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:49 crc kubenswrapper[4955]: I0202 13:19:49.730426 4955 generic.go:334] "Generic (PLEG): container finished" podID="dbd94f38-a41d-4069-976f-fb347698edd6" containerID="3dd62fe38f541578e6715632c5c28cffee451192a6e233d2e840bf1af503efa0" exitCode=0 Feb 02 13:19:49 crc kubenswrapper[4955]: I0202 13:19:49.730465 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5556dd9bdb-vvgs6" event={"ID":"dbd94f38-a41d-4069-976f-fb347698edd6","Type":"ContainerDied","Data":"3dd62fe38f541578e6715632c5c28cffee451192a6e233d2e840bf1af503efa0"} Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.449489 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d8c5fddf-xssbs"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.452203 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.465349 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-h6vsl" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.465397 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.466895 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.478690 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d8c5fddf-xssbs"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.565596 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-qw2ns"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.575686 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.577217 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.577348 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6w5r\" (UniqueName: \"kubernetes.io/projected/e4433372-00c8-4e01-8813-4fed0ea54158-kube-api-access-q6w5r\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.577419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data-custom\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.577468 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-combined-ca-bundle\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.595061 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-qw2ns"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.652117 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-854d558954-hhsmv"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.653643 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.661313 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679067 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679099 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679155 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6w5r\" (UniqueName: \"kubernetes.io/projected/e4433372-00c8-4e01-8813-4fed0ea54158-kube-api-access-q6w5r\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679209 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679236 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqc8x\" (UniqueName: \"kubernetes.io/projected/02c53bcf-5b6a-4bc5-b677-b01a827904ff-kube-api-access-pqc8x\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679257 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679284 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data-custom\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679317 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-combined-ca-bundle\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.679356 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.692773 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data-custom\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.700730 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-854d558954-hhsmv"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.702818 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.716219 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-combined-ca-bundle\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.720519 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6w5r\" (UniqueName: \"kubernetes.io/projected/e4433372-00c8-4e01-8813-4fed0ea54158-kube-api-access-q6w5r\") pod \"heat-engine-6d8c5fddf-xssbs\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.732225 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78c78d7bb-4kb6m"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.768517 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78c78d7bb-4kb6m"] Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.768671 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.782196 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.796399 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqc8x\" (UniqueName: \"kubernetes.io/projected/02c53bcf-5b6a-4bc5-b677-b01a827904ff-kube-api-access-pqc8x\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.796711 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797015 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data-custom\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797162 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrdpv\" (UniqueName: \"kubernetes.io/projected/56b43419-bf45-4850-996c-276b31e090d3-kube-api-access-vrdpv\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797292 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-combined-ca-bundle\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797507 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797626 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797754 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.797973 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.798828 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.802266 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.803335 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.804673 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.804735 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.805735 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.832496 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.858763 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqc8x\" (UniqueName: \"kubernetes.io/projected/02c53bcf-5b6a-4bc5-b677-b01a827904ff-kube-api-access-pqc8x\") pod \"dnsmasq-dns-7756b9d78c-qw2ns\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") " pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.902929 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.903329 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data-custom\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.903472 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data-custom\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.903628 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrdpv\" (UniqueName: \"kubernetes.io/projected/56b43419-bf45-4850-996c-276b31e090d3-kube-api-access-vrdpv\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.903766 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-combined-ca-bundle\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.903957 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.904056 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhs9t\" (UniqueName: \"kubernetes.io/projected/3b6e5ec6-9488-4be9-852d-defd9556b4ca-kube-api-access-bhs9t\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.904167 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-combined-ca-bundle\") pod 
\"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.919051 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.919124 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-combined-ca-bundle\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.919901 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data-custom\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.935888 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.948944 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrdpv\" (UniqueName: \"kubernetes.io/projected/56b43419-bf45-4850-996c-276b31e090d3-kube-api-access-vrdpv\") pod \"heat-cfnapi-854d558954-hhsmv\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:51 crc kubenswrapper[4955]: I0202 13:19:51.978942 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.010696 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.010788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data-custom\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.010897 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhs9t\" (UniqueName: \"kubernetes.io/projected/3b6e5ec6-9488-4be9-852d-defd9556b4ca-kube-api-access-bhs9t\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.010937 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-combined-ca-bundle\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.025919 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-combined-ca-bundle\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.028221 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data-custom\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.030679 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.040120 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhs9t\" (UniqueName: \"kubernetes.io/projected/3b6e5ec6-9488-4be9-852d-defd9556b4ca-kube-api-access-bhs9t\") pod \"heat-api-78c78d7bb-4kb6m\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.169845 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.223915 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.223976 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5556dd9bdb-vvgs6" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.724716 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.726379 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-central-agent" containerID="cri-o://43dc5f0a7f8f526a75705c4738723e20ae7abcebe886c620a7ad5f69ee7296a3" gracePeriod=30 Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.726440 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="sg-core" containerID="cri-o://b98cf6590b4e80b3df7715ddda4870ff3a8e18b54ca12d03c80affd54b2b09d6" gracePeriod=30 Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.726440 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="proxy-httpd" containerID="cri-o://ba955a1496658639e275b8b6d9b51ed88e6f0e8864fc5de8675f0fd6180d98eb" gracePeriod=30 Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.726479 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-notification-agent" containerID="cri-o://40fa117856d5155f71e264fade9f24cd4016b98b2e636e823597d994d4468d7c" gracePeriod=30 Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.735406 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.774472 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.774722 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-log" containerID="cri-o://4dd195906a79b457df40eec250a447bc000ffedbad646e88caba8037efa32a1d" gracePeriod=30 Feb 02 13:19:52 crc kubenswrapper[4955]: I0202 13:19:52.774980 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-httpd" containerID="cri-o://965cdc793093de7895abb18c5143614f7bc787de6e41c8bb1d77a7a85cd49c52" gracePeriod=30 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.572207 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826225 4955 generic.go:334] "Generic (PLEG): container finished" podID="06f5162a-d775-4658-b6a3-10e528720bcf" containerID="ba955a1496658639e275b8b6d9b51ed88e6f0e8864fc5de8675f0fd6180d98eb" exitCode=0 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826264 4955 generic.go:334] "Generic (PLEG): container finished" podID="06f5162a-d775-4658-b6a3-10e528720bcf" containerID="b98cf6590b4e80b3df7715ddda4870ff3a8e18b54ca12d03c80affd54b2b09d6" exitCode=2 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826278 4955 generic.go:334] "Generic (PLEG): container finished" podID="06f5162a-d775-4658-b6a3-10e528720bcf" containerID="40fa117856d5155f71e264fade9f24cd4016b98b2e636e823597d994d4468d7c" exitCode=0 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826287 4955 generic.go:334] "Generic (PLEG): container finished" podID="06f5162a-d775-4658-b6a3-10e528720bcf" containerID="43dc5f0a7f8f526a75705c4738723e20ae7abcebe886c620a7ad5f69ee7296a3" exitCode=0 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826309 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerDied","Data":"ba955a1496658639e275b8b6d9b51ed88e6f0e8864fc5de8675f0fd6180d98eb"} Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826350 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerDied","Data":"b98cf6590b4e80b3df7715ddda4870ff3a8e18b54ca12d03c80affd54b2b09d6"} Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826363 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerDied","Data":"40fa117856d5155f71e264fade9f24cd4016b98b2e636e823597d994d4468d7c"} Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.826373 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerDied","Data":"43dc5f0a7f8f526a75705c4738723e20ae7abcebe886c620a7ad5f69ee7296a3"} Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.830073 4955 generic.go:334] "Generic (PLEG): container finished" podID="39986812-66de-430e-a32f-95242971ddc6" containerID="4dd195906a79b457df40eec250a447bc000ffedbad646e88caba8037efa32a1d" exitCode=143 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.830103 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39986812-66de-430e-a32f-95242971ddc6","Type":"ContainerDied","Data":"4dd195906a79b457df40eec250a447bc000ffedbad646e88caba8037efa32a1d"} Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.865820 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.866087 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-log" containerID="cri-o://0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5" gracePeriod=30 Feb 02 13:19:53 crc kubenswrapper[4955]: I0202 13:19:53.866176 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-httpd" containerID="cri-o://f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464" gracePeriod=30 Feb 02 13:19:54 crc kubenswrapper[4955]: I0202 13:19:54.840829 4955 generic.go:334] "Generic (PLEG): container finished" podID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerID="0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5" exitCode=143 Feb 02 13:19:54 crc kubenswrapper[4955]: I0202 13:19:54.840871 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d3ca271-8968-4a48-a1a5-be53a4038119","Type":"ContainerDied","Data":"0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5"} Feb 02 13:19:55 crc kubenswrapper[4955]: I0202 13:19:55.936881 4955 generic.go:334] "Generic (PLEG): container finished" podID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerID="6fb377675d21b30988b781602e0ff194c6d3a5ec8482f3ed3372259fb696f250" exitCode=0 Feb 02 13:19:55 crc kubenswrapper[4955]: I0202 13:19:55.937499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b87959d-4rkvv" event={"ID":"1dee501b-e122-4870-b3bb-4096d3dcc975","Type":"ContainerDied","Data":"6fb377675d21b30988b781602e0ff194c6d3a5ec8482f3ed3372259fb696f250"} Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.105406 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.215080 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r5q8\" (UniqueName: \"kubernetes.io/projected/06f5162a-d775-4658-b6a3-10e528720bcf-kube-api-access-2r5q8\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.215804 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-combined-ca-bundle\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.215891 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-log-httpd\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.215948 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-scripts\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.215981 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-config-data\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.216049 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-run-httpd\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: 
\"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.216105 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-sg-core-conf-yaml\") pod \"06f5162a-d775-4658-b6a3-10e528720bcf\" (UID: \"06f5162a-d775-4658-b6a3-10e528720bcf\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.219452 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.219940 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.232312 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f5162a-d775-4658-b6a3-10e528720bcf-kube-api-access-2r5q8" (OuterVolumeSpecName: "kube-api-access-2r5q8") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "kube-api-access-2r5q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.243055 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-scripts" (OuterVolumeSpecName: "scripts") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.279746 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.318671 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.318709 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.318721 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f5162a-d775-4658-b6a3-10e528720bcf-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.318733 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.318747 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r5q8\" (UniqueName: \"kubernetes.io/projected/06f5162a-d775-4658-b6a3-10e528720bcf-kube-api-access-2r5q8\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.350654 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.406707 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-config-data" (OuterVolumeSpecName: "config-data") pod "06f5162a-d775-4658-b6a3-10e528720bcf" (UID: "06f5162a-d775-4658-b6a3-10e528720bcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.420507 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.420541 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f5162a-d775-4658-b6a3-10e528720bcf-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.440025 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.522459 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data\") pod \"dbd94f38-a41d-4069-976f-fb347698edd6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.522539 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-combined-ca-bundle\") pod \"dbd94f38-a41d-4069-976f-fb347698edd6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.522800 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data-custom\") pod \"dbd94f38-a41d-4069-976f-fb347698edd6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.522843 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l2nj\" (UniqueName: \"kubernetes.io/projected/dbd94f38-a41d-4069-976f-fb347698edd6-kube-api-access-7l2nj\") pod \"dbd94f38-a41d-4069-976f-fb347698edd6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.522877 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbd94f38-a41d-4069-976f-fb347698edd6-logs\") pod \"dbd94f38-a41d-4069-976f-fb347698edd6\" (UID: \"dbd94f38-a41d-4069-976f-fb347698edd6\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.523801 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbd94f38-a41d-4069-976f-fb347698edd6-logs" (OuterVolumeSpecName: "logs") pod "dbd94f38-a41d-4069-976f-fb347698edd6" (UID: "dbd94f38-a41d-4069-976f-fb347698edd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.530090 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbd94f38-a41d-4069-976f-fb347698edd6" (UID: "dbd94f38-a41d-4069-976f-fb347698edd6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.539596 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.539869 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd94f38-a41d-4069-976f-fb347698edd6-kube-api-access-7l2nj" (OuterVolumeSpecName: "kube-api-access-7l2nj") pod "dbd94f38-a41d-4069-976f-fb347698edd6" (UID: "dbd94f38-a41d-4069-976f-fb347698edd6"). InnerVolumeSpecName "kube-api-access-7l2nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.547478 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-qw2ns"] Feb 02 13:19:56 crc kubenswrapper[4955]: W0202 13:19:56.551884 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c53bcf_5b6a_4bc5_b677_b01a827904ff.slice/crio-d8456db8a0114eea7b69c2469cec72a24c164b11602ff1bffd4a30fe22ac23d3 WatchSource:0}: Error finding container d8456db8a0114eea7b69c2469cec72a24c164b11602ff1bffd4a30fe22ac23d3: Status 404 returned error can't find the container with id d8456db8a0114eea7b69c2469cec72a24c164b11602ff1bffd4a30fe22ac23d3 Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.591266 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbd94f38-a41d-4069-976f-fb347698edd6" (UID: "dbd94f38-a41d-4069-976f-fb347698edd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.598078 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data" (OuterVolumeSpecName: "config-data") pod "dbd94f38-a41d-4069-976f-fb347698edd6" (UID: "dbd94f38-a41d-4069-976f-fb347698edd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.630787 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n76j\" (UniqueName: \"kubernetes.io/projected/1dee501b-e122-4870-b3bb-4096d3dcc975-kube-api-access-4n76j\") pod \"1dee501b-e122-4870-b3bb-4096d3dcc975\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.630902 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-combined-ca-bundle\") pod \"1dee501b-e122-4870-b3bb-4096d3dcc975\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.631098 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-httpd-config\") pod \"1dee501b-e122-4870-b3bb-4096d3dcc975\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.631161 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-ovndb-tls-certs\") pod \"1dee501b-e122-4870-b3bb-4096d3dcc975\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.631182 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-config\") pod \"1dee501b-e122-4870-b3bb-4096d3dcc975\" (UID: \"1dee501b-e122-4870-b3bb-4096d3dcc975\") " Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.634526 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.634580 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l2nj\" (UniqueName: \"kubernetes.io/projected/dbd94f38-a41d-4069-976f-fb347698edd6-kube-api-access-7l2nj\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.634592 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbd94f38-a41d-4069-976f-fb347698edd6-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.634601 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.634612 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd94f38-a41d-4069-976f-fb347698edd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.640331 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dee501b-e122-4870-b3bb-4096d3dcc975-kube-api-access-4n76j" (OuterVolumeSpecName: "kube-api-access-4n76j") pod "1dee501b-e122-4870-b3bb-4096d3dcc975" (UID: "1dee501b-e122-4870-b3bb-4096d3dcc975"). InnerVolumeSpecName "kube-api-access-4n76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.641931 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1dee501b-e122-4870-b3bb-4096d3dcc975" (UID: "1dee501b-e122-4870-b3bb-4096d3dcc975"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.702355 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-config" (OuterVolumeSpecName: "config") pod "1dee501b-e122-4870-b3bb-4096d3dcc975" (UID: "1dee501b-e122-4870-b3bb-4096d3dcc975"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.718798 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dee501b-e122-4870-b3bb-4096d3dcc975" (UID: "1dee501b-e122-4870-b3bb-4096d3dcc975"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.738494 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.738523 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.738535 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n76j\" (UniqueName: \"kubernetes.io/projected/1dee501b-e122-4870-b3bb-4096d3dcc975-kube-api-access-4n76j\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.738545 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.750690 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1dee501b-e122-4870-b3bb-4096d3dcc975" (UID: "1dee501b-e122-4870-b3bb-4096d3dcc975"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.840493 4955 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dee501b-e122-4870-b3bb-4096d3dcc975-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:56 crc kubenswrapper[4955]: W0202 13:19:56.895674 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b6e5ec6_9488_4be9_852d_defd9556b4ca.slice/crio-f364badde873b340901f5e00c87d7939b208e8d2c9630d64852e64727c7e851a WatchSource:0}: Error finding container f364badde873b340901f5e00c87d7939b208e8d2c9630d64852e64727c7e851a: Status 404 returned error can't find the container with id f364badde873b340901f5e00c87d7939b208e8d2c9630d64852e64727c7e851a Feb 02 13:19:56 crc kubenswrapper[4955]: W0202 13:19:56.898080 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56b43419_bf45_4850_996c_276b31e090d3.slice/crio-097cb7adc22ef71ed162256e951abea0719c8c37397b37dd00fa9a5202129451 WatchSource:0}: Error finding container 097cb7adc22ef71ed162256e951abea0719c8c37397b37dd00fa9a5202129451: Status 404 returned error can't find the container with id 097cb7adc22ef71ed162256e951abea0719c8c37397b37dd00fa9a5202129451 Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.901188 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d8c5fddf-xssbs"] Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.917150 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78c78d7bb-4kb6m"] Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.924993 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-854d558954-hhsmv"] Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.952026 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d8c5fddf-xssbs" 
event={"ID":"e4433372-00c8-4e01-8813-4fed0ea54158","Type":"ContainerStarted","Data":"8ee1daeb525406605ae2afbfe7d16018dadf517e30c20f20901b9280f4932f0f"} Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.962854 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-854d558954-hhsmv" event={"ID":"56b43419-bf45-4850-996c-276b31e090d3","Type":"ContainerStarted","Data":"097cb7adc22ef71ed162256e951abea0719c8c37397b37dd00fa9a5202129451"} Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.991288 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-79fb55657c-85sjk"] Feb 02 13:19:56 crc kubenswrapper[4955]: I0202 13:19:56.998361 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" event={"ID":"02c53bcf-5b6a-4bc5-b677-b01a827904ff","Type":"ContainerStarted","Data":"d8456db8a0114eea7b69c2469cec72a24c164b11602ff1bffd4a30fe22ac23d3"} Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.020065 4955 generic.go:334] "Generic (PLEG): container finished" podID="39986812-66de-430e-a32f-95242971ddc6" containerID="965cdc793093de7895abb18c5143614f7bc787de6e41c8bb1d77a7a85cd49c52" exitCode=0 Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.020151 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39986812-66de-430e-a32f-95242971ddc6","Type":"ContainerDied","Data":"965cdc793093de7895abb18c5143614f7bc787de6e41c8bb1d77a7a85cd49c52"} Feb 02 13:19:57 crc kubenswrapper[4955]: W0202 13:19:57.028752 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91116c53_5321_4170_9ec0_1c0588b81355.slice/crio-cda8f399790f1a3a365369a82818ec184736d49f1d8cbe64d82a2325aa228533 WatchSource:0}: Error finding container cda8f399790f1a3a365369a82818ec184736d49f1d8cbe64d82a2325aa228533: Status 404 returned error can't find the container with id cda8f399790f1a3a365369a82818ec184736d49f1d8cbe64d82a2325aa228533 Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.041901 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"19fa3e77-422a-425a-8e53-cefd5d880462","Type":"ContainerStarted","Data":"6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1"} Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.067212 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.8747222159999999 podStartE2EDuration="16.067194385s" podCreationTimestamp="2026-02-02 13:19:41 +0000 UTC" firstStartedPulling="2026-02-02 13:19:42.169717746 +0000 UTC m=+1033.082054196" lastFinishedPulling="2026-02-02 13:19:56.362189915 +0000 UTC m=+1047.274526365" observedRunningTime="2026-02-02 13:19:57.067158134 +0000 UTC m=+1047.979494594" watchObservedRunningTime="2026-02-02 13:19:57.067194385 +0000 UTC m=+1047.979530825" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.081384 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.093356 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f5162a-d775-4658-b6a3-10e528720bcf","Type":"ContainerDied","Data":"c2b2f36ec693d26f0e87bc9e4e7356b2e071acb825520796b36ede82b66e0305"} Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.093432 4955 scope.go:117] "RemoveContainer" containerID="ba955a1496658639e275b8b6d9b51ed88e6f0e8864fc5de8675f0fd6180d98eb" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.107203 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c78d7bb-4kb6m" event={"ID":"3b6e5ec6-9488-4be9-852d-defd9556b4ca","Type":"ContainerStarted","Data":"f364badde873b340901f5e00c87d7939b208e8d2c9630d64852e64727c7e851a"} Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.126313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b87959d-4rkvv" event={"ID":"1dee501b-e122-4870-b3bb-4096d3dcc975","Type":"ContainerDied","Data":"dcf6a7ffd22950608c2af4eee9a8954041c4968fc1d752ec7ee0fbe11cbda077"} Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.126415 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-584b87959d-4rkvv" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.161110 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5556dd9bdb-vvgs6" event={"ID":"dbd94f38-a41d-4069-976f-fb347698edd6","Type":"ContainerDied","Data":"dee0009f92f6c4015a2d7b481e9d713ae051de32b6b6e68874a77018cc5cced1"} Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.161215 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5556dd9bdb-vvgs6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.189983 4955 scope.go:117] "RemoveContainer" containerID="b98cf6590b4e80b3df7715ddda4870ff3a8e18b54ca12d03c80affd54b2b09d6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.193673 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.216909 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.228837 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229276 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229297 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229319 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="sg-core" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229328 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="sg-core" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229346 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229353 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229364 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-notification-agent" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229370 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-notification-agent" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229388 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-central-agent" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229396 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-central-agent" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229416 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229425 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229435 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-api" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229443 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-api" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.229456 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="proxy-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229462 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="proxy-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229701 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229714 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" containerName="neutron-api" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229724 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-central-agent" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229732 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="ceilometer-notification-agent" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229748 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api-log" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229759 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="proxy-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.229772 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" containerName="sg-core" Feb 02 13:19:57 crc 
kubenswrapper[4955]: I0202 13:19:57.229793 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" containerName="barbican-api" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.231681 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.242342 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q76t2"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.243877 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q76t2" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.251138 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.251337 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.262666 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.302535 4955 scope.go:117] "RemoveContainer" containerID="40fa117856d5155f71e264fade9f24cd4016b98b2e636e823597d994d4468d7c" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.313590 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q76t2"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.324191 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-584b87959d-4rkvv"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.346550 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-584b87959d-4rkvv"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376001 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-run-httpd\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376083 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376134 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/98ea679e-a44a-4bd0-867e-044542b96bbb-kube-api-access-trq8c\") pod \"nova-api-db-create-q76t2\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " pod="openstack/nova-api-db-create-q76t2" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376199 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376249 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-scripts\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376596 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff24f\" (UniqueName: \"kubernetes.io/projected/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-kube-api-access-ff24f\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376718 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-log-httpd\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.376875 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ea679e-a44a-4bd0-867e-044542b96bbb-operator-scripts\") pod \"nova-api-db-create-q76t2\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " pod="openstack/nova-api-db-create-q76t2" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.377044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-config-data\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.404408 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-75hh6"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.411095 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.419670 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-75hh6"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.468375 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mrzb9"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.469700 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.476929 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c3ee-account-create-update-7nl28"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.478072 4955 util.go:30] "No sandbox for pod can be found. 
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.478508 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-config-data\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.478589 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-run-httpd\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.478610 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.478638 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/98ea679e-a44a-4bd0-867e-044542b96bbb-kube-api-access-trq8c\") pod \"nova-api-db-create-q76t2\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " pod="openstack/nova-api-db-create-q76t2"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.478669 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.479143 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-scripts\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.479332 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff24f\" (UniqueName: \"kubernetes.io/projected/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-kube-api-access-ff24f\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.479380 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a17c62e-18bb-4a12-9865-8d38c0b7102f-operator-scripts\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.479442 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-log-httpd\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.479496 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6qt\" (UniqueName: \"kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6"
(UniqueName: \"kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.479573 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ea679e-a44a-4bd0-867e-044542b96bbb-operator-scripts\") pod \"nova-api-db-create-q76t2\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " pod="openstack/nova-api-db-create-q76t2" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.480462 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.480575 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ea679e-a44a-4bd0-867e-044542b96bbb-operator-scripts\") pod \"nova-api-db-create-q76t2\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " pod="openstack/nova-api-db-create-q76t2" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.481271 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-run-httpd\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.481519 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-log-httpd\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.499685 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-scripts\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.502022 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.502111 4955 scope.go:117] "RemoveContainer" containerID="43dc5f0a7f8f526a75705c4738723e20ae7abcebe886c620a7ad5f69ee7296a3" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.503104 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/98ea679e-a44a-4bd0-867e-044542b96bbb-kube-api-access-trq8c\") pod \"nova-api-db-create-q76t2\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " pod="openstack/nova-api-db-create-q76t2" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.511884 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff24f\" (UniqueName: \"kubernetes.io/projected/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-kube-api-access-ff24f\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.513542 4955 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-cell1-db-create-mrzb9"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.513878 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.528270 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-config-data\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.532465 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c3ee-account-create-update-7nl28"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.545046 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " pod="openstack/ceilometer-0" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.551161 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5556dd9bdb-vvgs6"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581159 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-public-tls-certs\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581272 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-scripts\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581402 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-httpd-run\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581427 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-combined-ca-bundle\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581491 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581525 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-logs\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581551 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-config-data\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581600 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9sh2\" (UniqueName: \"kubernetes.io/projected/39986812-66de-430e-a32f-95242971ddc6-kube-api-access-z9sh2\") pod \"39986812-66de-430e-a32f-95242971ddc6\" (UID: \"39986812-66de-430e-a32f-95242971ddc6\") " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581925 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a17c62e-18bb-4a12-9865-8d38c0b7102f-operator-scripts\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.581983 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6qt\" (UniqueName: \"kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.582026 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5fb8423-7ae4-4515-8920-72d90de48d8e-operator-scripts\") pod \"nova-cell1-db-create-mrzb9\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.582078 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprfx\" (UniqueName: \"kubernetes.io/projected/b5fb8423-7ae4-4515-8920-72d90de48d8e-kube-api-access-zprfx\") pod \"nova-cell1-db-create-mrzb9\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.582105 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67459792-2667-4acf-9ce1-6b715ce15a98-operator-scripts\") pod \"nova-api-c3ee-account-create-update-7nl28\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.582154 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6mg\" (UniqueName: \"kubernetes.io/projected/67459792-2667-4acf-9ce1-6b715ce15a98-kube-api-access-5g6mg\") pod \"nova-api-c3ee-account-create-update-7nl28\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.586662 4955 scope.go:117] "RemoveContainer" containerID="5c218e0830897ee24d6a272a3af19e0b501794b096ae5959823a6705204402d8" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.587489 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-logs" (OuterVolumeSpecName: "logs") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.587740 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.592989 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5556dd9bdb-vvgs6"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.593469 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a17c62e-18bb-4a12-9865-8d38c0b7102f-operator-scripts\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.603785 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-scripts" (OuterVolumeSpecName: "scripts") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.609271 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6qt\" (UniqueName: \"kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt\") pod \"nova-cell0-db-create-75hh6\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.612441 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39986812-66de-430e-a32f-95242971ddc6-kube-api-access-z9sh2" (OuterVolumeSpecName: "kube-api-access-z9sh2") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "kube-api-access-z9sh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.620685 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.630611 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-be3f-account-create-update-xvcsp"] Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.631044 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.631056 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.631087 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-log" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.631095 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-log" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.631281 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-log" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.631299 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="39986812-66de-430e-a32f-95242971ddc6" containerName="glance-httpd" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.631923 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.634957 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be3f-account-create-update-xvcsp"] Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.644539 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686712 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5fb8423-7ae4-4515-8920-72d90de48d8e-operator-scripts\") pod \"nova-cell1-db-create-mrzb9\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprfx\" (UniqueName: \"kubernetes.io/projected/b5fb8423-7ae4-4515-8920-72d90de48d8e-kube-api-access-zprfx\") pod \"nova-cell1-db-create-mrzb9\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686814 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67459792-2667-4acf-9ce1-6b715ce15a98-operator-scripts\") pod \"nova-api-c3ee-account-create-update-7nl28\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686872 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6mg\" (UniqueName: \"kubernetes.io/projected/67459792-2667-4acf-9ce1-6b715ce15a98-kube-api-access-5g6mg\") pod \"nova-api-c3ee-account-create-update-7nl28\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " 
pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686946 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686968 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686978 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39986812-66de-430e-a32f-95242971ddc6-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686987 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9sh2\" (UniqueName: \"kubernetes.io/projected/39986812-66de-430e-a32f-95242971ddc6-kube-api-access-z9sh2\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.686997 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.689667 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5fb8423-7ae4-4515-8920-72d90de48d8e-operator-scripts\") pod \"nova-cell1-db-create-mrzb9\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.690301 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67459792-2667-4acf-9ce1-6b715ce15a98-operator-scripts\") pod \"nova-api-c3ee-account-create-update-7nl28\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.709859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.731313 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zprfx\" (UniqueName: \"kubernetes.io/projected/b5fb8423-7ae4-4515-8920-72d90de48d8e-kube-api-access-zprfx\") pod \"nova-cell1-db-create-mrzb9\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.734924 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6mg\" (UniqueName: \"kubernetes.io/projected/67459792-2667-4acf-9ce1-6b715ce15a98-kube-api-access-5g6mg\") pod \"nova-api-c3ee-account-create-update-7nl28\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.749540 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f5162a-d775-4658-b6a3-10e528720bcf" path="/var/lib/kubelet/pods/06f5162a-d775-4658-b6a3-10e528720bcf/volumes" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.750511 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dee501b-e122-4870-b3bb-4096d3dcc975" path="/var/lib/kubelet/pods/1dee501b-e122-4870-b3bb-4096d3dcc975/volumes" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.751285 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd94f38-a41d-4069-976f-fb347698edd6" path="/var/lib/kubelet/pods/dbd94f38-a41d-4069-976f-fb347698edd6/volumes" Feb 02 13:19:57 crc kubenswrapper[4955]: E0202 13:19:57.760231 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06f5162a_d775_4658_b6a3_10e528720bcf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dee501b_e122_4870_b3bb_4096d3dcc975.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dee501b_e122_4870_b3bb_4096d3dcc975.slice/crio-dcf6a7ffd22950608c2af4eee9a8954041c4968fc1d752ec7ee0fbe11cbda077\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06f5162a_d775_4658_b6a3_10e528720bcf.slice/crio-c2b2f36ec693d26f0e87bc9e4e7356b2e071acb825520796b36ede82b66e0305\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbd94f38_a41d_4069_976f_fb347698edd6.slice/crio-dee0009f92f6c4015a2d7b481e9d713ae051de32b6b6e68874a77018cc5cced1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c53bcf_5b6a_4bc5_b677_b01a827904ff.slice/crio-conmon-d42086a3134c4f15c2cb0867708e12c9828c701f75976d7a9d696c97e0434fae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3ca271_8968_4a48_a1a5_be53a4038119.slice/crio-conmon-f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.763376 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.789731 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8faf7286-4893-432d-bbe7-a431158357f9-operator-scripts\") pod \"nova-cell0-be3f-account-create-update-xvcsp\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " pod="openstack/nova-cell0-be3f-account-create-update-xvcsp"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.789817 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqth\" (UniqueName: \"kubernetes.io/projected/8faf7286-4893-432d-bbe7-a431158357f9-kube-api-access-wpqth\") pod \"nova-cell0-be3f-account-create-update-xvcsp\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " pod="openstack/nova-cell0-be3f-account-create-update-xvcsp"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.789920 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.789937 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.793273 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q76t2"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.837287 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.849822 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-75hh6"
Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.890818 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-config-data" (OuterVolumeSpecName: "config-data") pod "39986812-66de-430e-a32f-95242971ddc6" (UID: "39986812-66de-430e-a32f-95242971ddc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.891845 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8faf7286-4893-432d-bbe7-a431158357f9-operator-scripts\") pod \"nova-cell0-be3f-account-create-update-xvcsp\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.891937 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqth\" (UniqueName: \"kubernetes.io/projected/8faf7286-4893-432d-bbe7-a431158357f9-kube-api-access-wpqth\") pod \"nova-cell0-be3f-account-create-update-xvcsp\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.894924 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8faf7286-4893-432d-bbe7-a431158357f9-operator-scripts\") pod \"nova-cell0-be3f-account-create-update-xvcsp\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.901085 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.901121 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39986812-66de-430e-a32f-95242971ddc6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.916384 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqth\" (UniqueName: \"kubernetes.io/projected/8faf7286-4893-432d-bbe7-a431158357f9-kube-api-access-wpqth\") pod \"nova-cell0-be3f-account-create-update-xvcsp\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:19:57 crc kubenswrapper[4955]: I0202 13:19:57.998873 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.007335 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-afac-account-create-update-kpmqz"] Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.011158 4955 util.go:30] "No sandbox for pod can be found. 
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.011585 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-afac-account-create-update-kpmqz"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.011844 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-9585f6f46-wl58s"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.014428 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.017550 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8dd674b7b-kd8rg"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.019008 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9585f6f46-wl58s"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.021430 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9585f6f46-wl58s"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.021488 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8dd674b7b-kd8rg"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.021505 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5b4597c47b-whqbv"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.021786 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8dd674b7b-kd8rg"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.024221 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b4597c47b-whqbv"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.024372 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b4597c47b-whqbv"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.048053 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c3ee-account-create-update-7nl28"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.074196 4955 scope.go:117] "RemoveContainer" containerID="6fb377675d21b30988b781602e0ff194c6d3a5ec8482f3ed3372259fb696f250"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.082937 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118334 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-combined-ca-bundle\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118436 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data-custom\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118464 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-config-data\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118495 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118536 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-config-data-custom\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118693 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-combined-ca-bundle\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118735 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7rx\" (UniqueName: \"kubernetes.io/projected/f8e3a0ef-22c1-4f58-bd10-175541f41d88-kube-api-access-8m7rx\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118755 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-combined-ca-bundle\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118796 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5jl\" (UniqueName: \"kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg"
(UniqueName: \"kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118840 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118865 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rvr\" (UniqueName: \"kubernetes.io/projected/c6eff7b8-c700-48bd-b71b-0343fca61cc4-kube-api-access-t4rvr\") pod \"nova-cell1-afac-account-create-update-kpmqz\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118900 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6eff7b8-c700-48bd-b71b-0343fca61cc4-operator-scripts\") pod \"nova-cell1-afac-account-create-update-kpmqz\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118922 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data-custom\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.118948 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrgm\" (UniqueName: \"kubernetes.io/projected/5634670e-87b2-4c94-a877-853ad21f32b2-kube-api-access-srrgm\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.123921 4955 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.219892 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-logs\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.219945 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.219972 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-internal-tls-certs\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220027 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-httpd-run\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220049 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-scripts\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220072 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-config-data\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220106 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckp7g\" (UniqueName: \"kubernetes.io/projected/6d3ca271-8968-4a48-a1a5-be53a4038119-kube-api-access-ckp7g\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220148 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-combined-ca-bundle\") pod \"6d3ca271-8968-4a48-a1a5-be53a4038119\" (UID: \"6d3ca271-8968-4a48-a1a5-be53a4038119\") "
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220350 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data-custom\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrgm\" (UniqueName: \"kubernetes.io/projected/5634670e-87b2-4c94-a877-853ad21f32b2-kube-api-access-srrgm\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv"
\"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220419 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-combined-ca-bundle\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220446 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data-custom\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220469 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-config-data\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220490 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220507 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-config-data-custom\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220534 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-combined-ca-bundle\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220597 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m7rx\" (UniqueName: \"kubernetes.io/projected/f8e3a0ef-22c1-4f58-bd10-175541f41d88-kube-api-access-8m7rx\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220616 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-combined-ca-bundle\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220655 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5jl\" (UniqueName: \"kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: 
\"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220721 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220746 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4rvr\" (UniqueName: \"kubernetes.io/projected/c6eff7b8-c700-48bd-b71b-0343fca61cc4-kube-api-access-t4rvr\") pod \"nova-cell1-afac-account-create-update-kpmqz\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.220777 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6eff7b8-c700-48bd-b71b-0343fca61cc4-operator-scripts\") pod \"nova-cell1-afac-account-create-update-kpmqz\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.221393 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6eff7b8-c700-48bd-b71b-0343fca61cc4-operator-scripts\") pod \"nova-cell1-afac-account-create-update-kpmqz\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.226517 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-logs" (OuterVolumeSpecName: "logs") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.230982 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-combined-ca-bundle\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.233280 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-combined-ca-bundle\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.234673 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.236876 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.240076 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-config-data-custom\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.240681 4955 scope.go:117] "RemoveContainer" containerID="3dd62fe38f541578e6715632c5c28cffee451192a6e233d2e840bf1af503efa0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.242307 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data-custom\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.243459 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3ca271-8968-4a48-a1a5-be53a4038119-kube-api-access-ckp7g" (OuterVolumeSpecName: "kube-api-access-ckp7g") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "kube-api-access-ckp7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.246009 4955 generic.go:334] "Generic (PLEG): container finished" podID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerID="d42086a3134c4f15c2cb0867708e12c9828c701f75976d7a9d696c97e0434fae" exitCode=0 Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.246101 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" event={"ID":"02c53bcf-5b6a-4bc5-b677-b01a827904ff","Type":"ContainerDied","Data":"d42086a3134c4f15c2cb0867708e12c9828c701f75976d7a9d696c97e0434fae"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.251280 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-combined-ca-bundle\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.260762 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.262287 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m7rx\" (UniqueName: \"kubernetes.io/projected/f8e3a0ef-22c1-4f58-bd10-175541f41d88-kube-api-access-8m7rx\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.263270 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.265977 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data-custom\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.266022 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79fb55657c-85sjk" event={"ID":"91116c53-5321-4170-9ec0-1c0588b81355","Type":"ContainerStarted","Data":"8b374e5ce25894380e342930d209baa08a88d5a84875d57e4ba34e9171015980"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.266059 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79fb55657c-85sjk" event={"ID":"91116c53-5321-4170-9ec0-1c0588b81355","Type":"ContainerStarted","Data":"cda8f399790f1a3a365369a82818ec184736d49f1d8cbe64d82a2325aa228533"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.266833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4rvr\" (UniqueName: \"kubernetes.io/projected/c6eff7b8-c700-48bd-b71b-0343fca61cc4-kube-api-access-t4rvr\") pod \"nova-cell1-afac-account-create-update-kpmqz\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.267717 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-scripts" (OuterVolumeSpecName: "scripts") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.272836 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3a0ef-22c1-4f58-bd10-175541f41d88-config-data\") pod \"heat-engine-9585f6f46-wl58s\" (UID: \"f8e3a0ef-22c1-4f58-bd10-175541f41d88\") " pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.279415 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5jl\" (UniqueName: \"kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl\") pod \"heat-api-8dd674b7b-kd8rg\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.289181 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39986812-66de-430e-a32f-95242971ddc6","Type":"ContainerDied","Data":"685f5307fe47fb5d48d4280ab312e6ecb49751dd0a106ab42b266e29d40038c1"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.289306 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.292885 4955 generic.go:334] "Generic (PLEG): container finished" podID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerID="f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464" exitCode=0 Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.292947 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d3ca271-8968-4a48-a1a5-be53a4038119","Type":"ContainerDied","Data":"f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.292976 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d3ca271-8968-4a48-a1a5-be53a4038119","Type":"ContainerDied","Data":"5787e93b03e11252c5c913a55f6d4366b57671de698acbbf6646fca8c470dab0"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.293029 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.294891 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d8c5fddf-xssbs" event={"ID":"e4433372-00c8-4e01-8813-4fed0ea54158","Type":"ContainerStarted","Data":"deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab"} Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.295361 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.301214 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrgm\" (UniqueName: \"kubernetes.io/projected/5634670e-87b2-4c94-a877-853ad21f32b2-kube-api-access-srrgm\") pod \"heat-cfnapi-5b4597c47b-whqbv\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.332388 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.332435 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.332450 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d3ca271-8968-4a48-a1a5-be53a4038119-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.332462 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.332474 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckp7g\" (UniqueName: \"kubernetes.io/projected/6d3ca271-8968-4a48-a1a5-be53a4038119-kube-api-access-ckp7g\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.336200 4955 scope.go:117] "RemoveContainer" containerID="9720d1c591b1dec49c82c853629bc29ea23ffa86cde4c173ef4428da55747d38" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.379618 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.402303 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.428512 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.434219 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.434290 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.436129 4955 scope.go:117] "RemoveContainer" containerID="965cdc793093de7895abb18c5143614f7bc787de6e41c8bb1d77a7a85cd49c52"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.440672 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.452833 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.457247 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-config-data" (OuterVolumeSpecName: "config-data") pod "6d3ca271-8968-4a48-a1a5-be53a4038119" (UID: "6d3ca271-8968-4a48-a1a5-be53a4038119"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.497774 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: E0202 13:19:58.498176 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-log"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.498188 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-log"
Feb 02 13:19:58 crc kubenswrapper[4955]: E0202 13:19:58.498204 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-httpd"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.498210 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-httpd"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.498392 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-log"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.498410 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" containerName="glance-httpd"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.499743 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.502409 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.503599 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.511030 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-afac-account-create-update-kpmqz"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.513261 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.523432 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9585f6f46-wl58s"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.537000 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.537032 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d3ca271-8968-4a48-a1a5-be53a4038119-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.549958 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8dd674b7b-kd8rg"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.552726 4955 scope.go:117] "RemoveContainer" containerID="4dd195906a79b457df40eec250a447bc000ffedbad646e88caba8037efa32a1d"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.574750 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b4597c47b-whqbv"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641100 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641175 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qwlf\" (UniqueName: \"kubernetes.io/projected/342df619-1ebd-498c-9199-5c48a35fb732-kube-api-access-7qwlf\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641205 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641232 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641263 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/342df619-1ebd-498c-9199-5c48a35fb732-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641311 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342df619-1ebd-498c-9199-5c48a35fb732-logs\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641333 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-config-data\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.641413 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-scripts\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.659974 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.681625 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.728698 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.730623 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.739247 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.743867 4955 scope.go:117] "RemoveContainer" containerID="f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.744782 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745413 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745802 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qwlf\" (UniqueName: \"kubernetes.io/projected/342df619-1ebd-498c-9199-5c48a35fb732-kube-api-access-7qwlf\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745827 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745848 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745881 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/342df619-1ebd-498c-9199-5c48a35fb732-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745925 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342df619-1ebd-498c-9199-5c48a35fb732-logs\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.745946 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-config-data\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.746012 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-scripts\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.748693 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/342df619-1ebd-498c-9199-5c48a35fb732-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.750700 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342df619-1ebd-498c-9199-5c48a35fb732-logs\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.757866 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-config-data\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.764330 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-scripts\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.767957 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.781352 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.792272 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/342df619-1ebd-498c-9199-5c48a35fb732-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.815673 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.835539 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qwlf\" (UniqueName: \"kubernetes.io/projected/342df619-1ebd-498c-9199-5c48a35fb732-kube-api-access-7qwlf\") pod \"glance-default-external-api-0\" (UID: \"342df619-1ebd-498c-9199-5c48a35fb732\") " pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.846000 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847240 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847267 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f22f6239-6425-47b5-9e00-664ff50c02dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847347 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22f6239-6425-47b5-9e00-664ff50c02dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847370 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847401 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847434 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.847453 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x666z\" (UniqueName: \"kubernetes.io/projected/f22f6239-6425-47b5-9e00-664ff50c02dc-kube-api-access-x666z\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960114 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960156 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f22f6239-6425-47b5-9e00-664ff50c02dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960173 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960226 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22f6239-6425-47b5-9e00-664ff50c02dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960253 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960283 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960316 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.960340 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x666z\" (UniqueName: \"kubernetes.io/projected/f22f6239-6425-47b5-9e00-664ff50c02dc-kube-api-access-x666z\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.961138 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.961253 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22f6239-6425-47b5-9e00-664ff50c02dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.976094 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.976354 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f22f6239-6425-47b5-9e00-664ff50c02dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.982312 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:58 crc kubenswrapper[4955]: I0202 13:19:58.986829 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.005811 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22f6239-6425-47b5-9e00-664ff50c02dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.014469 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x666z\" (UniqueName: \"kubernetes.io/projected/f22f6239-6425-47b5-9e00-664ff50c02dc-kube-api-access-x666z\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.058385 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.090109 4955 scope.go:117] "RemoveContainer" containerID="0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.104685 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q76t2"] Feb 02 13:19:59 crc kubenswrapper[4955]: W0202 13:19:59.109896 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod587dcc9e_137d_4dd7_9a87_024e6d09b1e1.slice/crio-4bc7c2809b10893b72efa770d01208306e8fc54261efdd8dac09e8ba90c16b41 WatchSource:0}: Error finding container 4bc7c2809b10893b72efa770d01208306e8fc54261efdd8dac09e8ba90c16b41: Status 404 returned error can't find the container with id 4bc7c2809b10893b72efa770d01208306e8fc54261efdd8dac09e8ba90c16b41 Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.146984 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f22f6239-6425-47b5-9e00-664ff50c02dc\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.206762 4955 scope.go:117] "RemoveContainer" containerID="f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464" Feb 02 13:19:59 crc kubenswrapper[4955]: E0202 13:19:59.214382 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464\": container with ID starting with f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464 not found: ID does not exist" containerID="f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.214433 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464"} err="failed to get container status \"f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464\": rpc error: code = NotFound desc = could not find container \"f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464\": container with ID starting with f5dfba988f2e284ee4ed01ac1576ab7a510fe8949503b30f3d4961d1d8ea0464 not found: ID does not exist" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.214469 4955 scope.go:117] "RemoveContainer" containerID="0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5" Feb 02 13:19:59 crc kubenswrapper[4955]: E0202 13:19:59.215015 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5\": container with ID starting with 0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5 not found: ID does not exist" containerID="0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.215037 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5"} err="failed to get container status \"0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5\": rpc error: code = NotFound desc = could not find container \"0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5\": container with ID starting with 0391500e009eaf107990fd8329b2872b76890f0bce585e0b2f28ad0f937f6de5 not found: ID does not exist" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.316687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q76t2" 
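The E0202 "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above read as a cleanup race rather than a real failure: scope.go re-issues "RemoveContainer" for an ID CRI-O has already deleted, and the runtime answers NotFound. Grouping the three record types by container ID makes the sequence easy to audit; a minimal sketch over the same hypothetical kubelet.log dump, assuming 64-hex-character container IDs as in the entries above:

    import re
    from collections import defaultdict

    pat = re.compile(r'"(RemoveContainer|ContainerStatus from runtime service failed|'
                     r'DeleteContainer returned error)".*?([0-9a-f]{64})')
    events = defaultdict(list)  # container ID -> [(timestamp, record type)]

    with open("kubelet.log") as f:  # hypothetical journalctl -u kubelet dump
        for line in f:
            m = pat.search(line)
            if m:
                events[m.group(2)].append((line[:15], m.group(1)))

    for cid, seq in events.items():
        print(cid[:12], [kind for _, kind in seq])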
event={"ID":"98ea679e-a44a-4bd0-867e-044542b96bbb","Type":"ContainerStarted","Data":"bba1f6e1a4f2bde0c184195a2111b49457d3e31f630426da1b373afcc17a34aa"} Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.328369 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.347873 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" event={"ID":"02c53bcf-5b6a-4bc5-b677-b01a827904ff","Type":"ContainerStarted","Data":"da73f062eb3f0dd87828ede93621219124e8d98bdf9bab1961dcc5e4f7e3e41f"} Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.347967 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.361962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerStarted","Data":"4bc7c2809b10893b72efa770d01208306e8fc54261efdd8dac09e8ba90c16b41"} Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.367911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79fb55657c-85sjk" event={"ID":"91116c53-5321-4170-9ec0-1c0588b81355","Type":"ContainerStarted","Data":"8751d1bfecec627bbaf0b41848da8b7114836a3800f8f130f1f707c41d69832d"} Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.368282 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.368646 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.405553 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mrzb9"] Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.410107 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" podStartSLOduration=8.410090277 podStartE2EDuration="8.410090277s" podCreationTimestamp="2026-02-02 13:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:59.391085149 +0000 UTC m=+1050.303421599" watchObservedRunningTime="2026-02-02 13:19:59.410090277 +0000 UTC m=+1050.322426727" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.443384 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-79fb55657c-85sjk" podStartSLOduration=11.443363219 podStartE2EDuration="11.443363219s" podCreationTimestamp="2026-02-02 13:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:19:59.441162347 +0000 UTC m=+1050.353498807" watchObservedRunningTime="2026-02-02 13:19:59.443363219 +0000 UTC m=+1050.355699669" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.506282 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.521152 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c3ee-account-create-update-7nl28"] Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.535241 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-75hh6"] Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.572529 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be3f-account-create-update-xvcsp"] Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.750336 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39986812-66de-430e-a32f-95242971ddc6" path="/var/lib/kubelet/pods/39986812-66de-430e-a32f-95242971ddc6/volumes" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.757376 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3ca271-8968-4a48-a1a5-be53a4038119" path="/var/lib/kubelet/pods/6d3ca271-8968-4a48-a1a5-be53a4038119/volumes" Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.930980 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9585f6f46-wl58s"] Feb 02 13:19:59 crc kubenswrapper[4955]: W0202 13:19:59.932938 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod247c58e7_931d_4356_a900_da1c877548cd.slice/crio-ec95f2640256f585f08389cfd2a0b516bdec0d2135fedc2c9e62767746afdb54 WatchSource:0}: Error finding container ec95f2640256f585f08389cfd2a0b516bdec0d2135fedc2c9e62767746afdb54: Status 404 returned error can't find the container with id ec95f2640256f585f08389cfd2a0b516bdec0d2135fedc2c9e62767746afdb54 Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.938441 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8dd674b7b-kd8rg"] Feb 02 13:19:59 crc kubenswrapper[4955]: W0202 13:19:59.940725 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e3a0ef_22c1_4f58_bd10_175541f41d88.slice/crio-7b637d87d43d27e92038e65f80d77ef2842cd03923d9798b44be6157e02d23ca WatchSource:0}: Error finding container 7b637d87d43d27e92038e65f80d77ef2842cd03923d9798b44be6157e02d23ca: Status 404 returned error can't find the container with id 7b637d87d43d27e92038e65f80d77ef2842cd03923d9798b44be6157e02d23ca Feb 02 13:19:59 crc kubenswrapper[4955]: W0202 13:19:59.942403 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6eff7b8_c700_48bd_b71b_0343fca61cc4.slice/crio-9f6f14e1702e4e1cf536a7344879819993d65181b7ae31d4ffae4d3947171e62 WatchSource:0}: Error finding container 9f6f14e1702e4e1cf536a7344879819993d65181b7ae31d4ffae4d3947171e62: Status 404 returned error can't find the container with id 9f6f14e1702e4e1cf536a7344879819993d65181b7ae31d4ffae4d3947171e62 Feb 02 13:19:59 crc kubenswrapper[4955]: I0202 13:19:59.977184 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b4597c47b-whqbv"] Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.014084 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-afac-account-create-update-kpmqz"] Feb 02 13:20:00 crc kubenswrapper[4955]: W0202 13:20:00.031951 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342df619_1ebd_498c_9199_5c48a35fb732.slice/crio-26fd7fb8448e19ba38d460a5ee4931cf4c9fcb0aad03e137ec7fc603136fa787 WatchSource:0}: Error finding container 26fd7fb8448e19ba38d460a5ee4931cf4c9fcb0aad03e137ec7fc603136fa787: Status 404 returned error can't find the container with id 26fd7fb8448e19ba38d460a5ee4931cf4c9fcb0aad03e137ec7fc603136fa787 
Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.036734 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.114681 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.432293 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q76t2" event={"ID":"98ea679e-a44a-4bd0-867e-044542b96bbb","Type":"ContainerStarted","Data":"61f5984eed4d359d9f82a92f3bac2910a804f3e8a970b0840e4a9dddd450a4d9"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.444228 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-afac-account-create-update-kpmqz" event={"ID":"c6eff7b8-c700-48bd-b71b-0343fca61cc4","Type":"ContainerStarted","Data":"9f6f14e1702e4e1cf536a7344879819993d65181b7ae31d4ffae4d3947171e62"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.453310 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-q76t2" podStartSLOduration=3.45328842 podStartE2EDuration="3.45328842s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:00.446970148 +0000 UTC m=+1051.359306588" watchObservedRunningTime="2026-02-02 13:20:00.45328842 +0000 UTC m=+1051.365624870" Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.456499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrzb9" event={"ID":"b5fb8423-7ae4-4515-8920-72d90de48d8e","Type":"ContainerStarted","Data":"54ae6550e82a61884ff85589327d624190bc43eca56aeaa92e6ab41ee52879c3"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.456583 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrzb9" event={"ID":"b5fb8423-7ae4-4515-8920-72d90de48d8e","Type":"ContainerStarted","Data":"e9593fa50165c1745050277c7e2e3f5937278086d985bb029858f028acce189b"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.465721 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ee-account-create-update-7nl28" event={"ID":"67459792-2667-4acf-9ce1-6b715ce15a98","Type":"ContainerStarted","Data":"d589b6ddf71bd6ab03216e9219dcce37331fff24bd5d7c0cfa0d7b3b48404acf"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.474101 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9585f6f46-wl58s" event={"ID":"f8e3a0ef-22c1-4f58-bd10-175541f41d88","Type":"ContainerStarted","Data":"7b637d87d43d27e92038e65f80d77ef2842cd03923d9798b44be6157e02d23ca"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.481063 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-mrzb9" podStartSLOduration=3.48104586 podStartE2EDuration="3.48104586s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:00.470808293 +0000 UTC m=+1051.383144743" watchObservedRunningTime="2026-02-02 13:20:00.48104586 +0000 UTC m=+1051.393382310" Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.484442 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8dd674b7b-kd8rg" 
event={"ID":"247c58e7-931d-4356-a900-da1c877548cd","Type":"ContainerStarted","Data":"ec95f2640256f585f08389cfd2a0b516bdec0d2135fedc2c9e62767746afdb54"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.488140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"342df619-1ebd-498c-9199-5c48a35fb732","Type":"ContainerStarted","Data":"26fd7fb8448e19ba38d460a5ee4931cf4c9fcb0aad03e137ec7fc603136fa787"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.492097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" event={"ID":"8faf7286-4893-432d-bbe7-a431158357f9","Type":"ContainerStarted","Data":"2b2ed410488de04f9007269d824389f9cda8924c0c10470cdadda4b729e4f376"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.492155 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" event={"ID":"8faf7286-4893-432d-bbe7-a431158357f9","Type":"ContainerStarted","Data":"5fe77086617dc20778eff1bc3bfc8dd76da3ff98a476785d1b25ecc315c50d7c"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.494667 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" event={"ID":"5634670e-87b2-4c94-a877-853ad21f32b2","Type":"ContainerStarted","Data":"c2e14183e82e05f4ec2d9450493dac18f3e2ccd3ea2f9fb18127ff1d36930ef9"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.495974 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f22f6239-6425-47b5-9e00-664ff50c02dc","Type":"ContainerStarted","Data":"28f40135dea3c5dcf5728e0efca02165c9c658b765271b60f49ce04486113a67"} Feb 02 13:20:00 crc kubenswrapper[4955]: I0202 13:20:00.505535 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-75hh6" event={"ID":"2a17c62e-18bb-4a12-9865-8d38c0b7102f","Type":"ContainerStarted","Data":"aa6503d311939a1619cad6338405cac1e44daa5d208dfd67560cb0732d7c0336"} Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.128287 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78c78d7bb-4kb6m"] Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.144495 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-854d558954-hhsmv"] Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.162492 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59594c5dbd-8r97t"] Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.164315 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.169811 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77c488fc4f-5xxjg"] Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.171794 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.173147 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.173528 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.173781 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.174037 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.193989 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59594c5dbd-8r97t"] Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.211118 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77c488fc4f-5xxjg"] Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266546 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-combined-ca-bundle\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266630 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-public-tls-certs\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266659 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-public-tls-certs\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266717 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-combined-ca-bundle\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266861 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-config-data\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-config-data-custom\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc 
kubenswrapper[4955]: I0202 13:20:01.266910 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-config-data-custom\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266933 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-config-data\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.266983 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q994d\" (UniqueName: \"kubernetes.io/projected/f7babdce-fcbe-452c-ac21-041d0cebf985-kube-api-access-q994d\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.267012 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmq99\" (UniqueName: \"kubernetes.io/projected/8d242088-395a-4a37-abec-5a0a15e68d91-kube-api-access-jmq99\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.267044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-internal-tls-certs\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.267083 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-internal-tls-certs\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368772 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-config-data\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368823 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-config-data-custom\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368851 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-config-data-custom\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: 
\"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368871 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-config-data\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368917 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q994d\" (UniqueName: \"kubernetes.io/projected/f7babdce-fcbe-452c-ac21-041d0cebf985-kube-api-access-q994d\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368954 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmq99\" (UniqueName: \"kubernetes.io/projected/8d242088-395a-4a37-abec-5a0a15e68d91-kube-api-access-jmq99\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.368981 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-internal-tls-certs\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.369011 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-internal-tls-certs\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.369042 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-combined-ca-bundle\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.369061 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-public-tls-certs\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.369118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-public-tls-certs\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.369157 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-combined-ca-bundle\") pod \"heat-api-59594c5dbd-8r97t\" (UID: 
\"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.374582 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-config-data-custom\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.374686 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-internal-tls-certs\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.376283 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-combined-ca-bundle\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.378547 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-public-tls-certs\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.381412 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-combined-ca-bundle\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.381434 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-internal-tls-certs\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.385063 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-config-data-custom\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.386476 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-public-tls-certs\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.386750 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d242088-395a-4a37-abec-5a0a15e68d91-config-data\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 
13:20:01.387491 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7babdce-fcbe-452c-ac21-041d0cebf985-config-data\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.398765 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmq99\" (UniqueName: \"kubernetes.io/projected/8d242088-395a-4a37-abec-5a0a15e68d91-kube-api-access-jmq99\") pod \"heat-api-59594c5dbd-8r97t\" (UID: \"8d242088-395a-4a37-abec-5a0a15e68d91\") " pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.405451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q994d\" (UniqueName: \"kubernetes.io/projected/f7babdce-fcbe-452c-ac21-041d0cebf985-kube-api-access-q994d\") pod \"heat-cfnapi-77c488fc4f-5xxjg\" (UID: \"f7babdce-fcbe-452c-ac21-041d0cebf985\") " pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.505376 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.520068 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ee-account-create-update-7nl28" event={"ID":"67459792-2667-4acf-9ce1-6b715ce15a98","Type":"ContainerStarted","Data":"8eefa9e33a962720d297e3051d63198af9ed29a47c4b1160460e30af54999844"} Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.523530 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-75hh6" event={"ID":"2a17c62e-18bb-4a12-9865-8d38c0b7102f","Type":"ContainerStarted","Data":"799246176d535c7b83e407a410833aabb1c68f0571ed47c263bea4e75b9fffa2"} Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.527204 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg"
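The reconciler_common.go and operation_generator.go entries above trace one full pass of the kubelet volume manager for the two new heat pods: VerifyControllerAttachedVolume confirms each volume is attached, MountVolume starts the mount, and MountVolume.SetUp reports the secret or projected content materialized on disk. Below is a minimal, hypothetical Go sketch of that desired-state-versus-actual-state reconcile pattern; the types and names are illustrative assumptions, not the kubelet's real internals.

    // Illustrative sketch only: the desired-state-of-world reconcile pattern
    // the volume manager logs above, with made-up types and names.
    package main

    import "fmt"

    type volume struct{ name, pod string }

    type worlds struct {
        desired []volume        // volumes the pod specs say must be mounted
        actual  map[string]bool // volumes already set up, keyed by name
    }

    // reconcile drives each missing volume through verify -> mount -> setup,
    // the three stages logged by reconciler_common.go:245, :218 and
    // operation_generator.go:637 above.
    func (w *worlds) reconcile() {
        for _, v := range w.desired {
            if w.actual[v.name] {
                continue // already mounted; nothing to do this pass
            }
            fmt.Printf("VerifyControllerAttachedVolume started for %q pod=%s\n", v.name, v.pod)
            fmt.Printf("MountVolume started for %q pod=%s\n", v.name, v.pod)
            w.actual[v.name] = true // SetUp would write the volume content here
            fmt.Printf("MountVolume.SetUp succeeded for %q pod=%s\n", v.name, v.pod)
        }
    }

    func main() {
        w := &worlds{
            desired: []volume{
                {"config-data", "openstack/heat-cfnapi-77c488fc4f-5xxjg"},
                {"kube-api-access-q994d", "openstack/heat-cfnapi-77c488fc4f-5xxjg"},
            },
            actual: map[string]bool{},
        }
        w.reconcile()
    }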
Need to start a new one" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.558862 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c3ee-account-create-update-7nl28" podStartSLOduration=4.558837948 podStartE2EDuration="4.558837948s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:01.542847423 +0000 UTC m=+1052.455183883" watchObservedRunningTime="2026-02-02 13:20:01.558837948 +0000 UTC m=+1052.471174398" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.564702 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-75hh6" podStartSLOduration=4.564683648 podStartE2EDuration="4.564683648s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:01.556188874 +0000 UTC m=+1052.468525344" watchObservedRunningTime="2026-02-02 13:20:01.564683648 +0000 UTC m=+1052.477020088" Feb 02 13:20:01 crc kubenswrapper[4955]: I0202 13:20:01.573791 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" podStartSLOduration=4.573773738 podStartE2EDuration="4.573773738s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:01.571284798 +0000 UTC m=+1052.483621248" watchObservedRunningTime="2026-02-02 13:20:01.573773738 +0000 UTC m=+1052.486110188" Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.562992 4955 generic.go:334] "Generic (PLEG): container finished" podID="2a17c62e-18bb-4a12-9865-8d38c0b7102f" containerID="799246176d535c7b83e407a410833aabb1c68f0571ed47c263bea4e75b9fffa2" exitCode=0 Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.563214 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-75hh6" event={"ID":"2a17c62e-18bb-4a12-9865-8d38c0b7102f","Type":"ContainerDied","Data":"799246176d535c7b83e407a410833aabb1c68f0571ed47c263bea4e75b9fffa2"} Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.571542 4955 generic.go:334] "Generic (PLEG): container finished" podID="98ea679e-a44a-4bd0-867e-044542b96bbb" containerID="61f5984eed4d359d9f82a92f3bac2910a804f3e8a970b0840e4a9dddd450a4d9" exitCode=0 Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.571643 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q76t2" event={"ID":"98ea679e-a44a-4bd0-867e-044542b96bbb","Type":"ContainerDied","Data":"61f5984eed4d359d9f82a92f3bac2910a804f3e8a970b0840e4a9dddd450a4d9"} Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.591653 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5fb8423-7ae4-4515-8920-72d90de48d8e" containerID="54ae6550e82a61884ff85589327d624190bc43eca56aeaa92e6ab41ee52879c3" exitCode=0 Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.591758 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrzb9" event={"ID":"b5fb8423-7ae4-4515-8920-72d90de48d8e","Type":"ContainerDied","Data":"54ae6550e82a61884ff85589327d624190bc43eca56aeaa92e6ab41ee52879c3"} Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 
13:20:02.612108 4955 generic.go:334] "Generic (PLEG): container finished" podID="67459792-2667-4acf-9ce1-6b715ce15a98" containerID="8eefa9e33a962720d297e3051d63198af9ed29a47c4b1160460e30af54999844" exitCode=0 Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.612230 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ee-account-create-update-7nl28" event={"ID":"67459792-2667-4acf-9ce1-6b715ce15a98","Type":"ContainerDied","Data":"8eefa9e33a962720d297e3051d63198af9ed29a47c4b1160460e30af54999844"} Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.618281 4955 generic.go:334] "Generic (PLEG): container finished" podID="8faf7286-4893-432d-bbe7-a431158357f9" containerID="2b2ed410488de04f9007269d824389f9cda8924c0c10470cdadda4b729e4f376" exitCode=0 Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.618378 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" event={"ID":"8faf7286-4893-432d-bbe7-a431158357f9","Type":"ContainerDied","Data":"2b2ed410488de04f9007269d824389f9cda8924c0c10470cdadda4b729e4f376"} Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.657528 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.697815 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-9585f6f46-wl58s" podStartSLOduration=5.69779644 podStartE2EDuration="5.69779644s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:02.689571733 +0000 UTC m=+1053.601908183" watchObservedRunningTime="2026-02-02 13:20:02.69779644 +0000 UTC m=+1053.610132890" Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.728380 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59594c5dbd-8r97t"] Feb 02 13:20:02 crc kubenswrapper[4955]: I0202 13:20:02.850505 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77c488fc4f-5xxjg"] Feb 02 13:20:02 crc kubenswrapper[4955]: W0202 13:20:02.912596 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7babdce_fcbe_452c_ac21_041d0cebf985.slice/crio-7d692f51af9d339bf4ab11cc34766b85a2a8b9adb2324620a51cbb96afbf7b67 WatchSource:0}: Error finding container 7d692f51af9d339bf4ab11cc34766b85a2a8b9adb2324620a51cbb96afbf7b67: Status 404 returned error can't find the container with id 7d692f51af9d339bf4ab11cc34766b85a2a8b9adb2324620a51cbb96afbf7b67 Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.689594 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerStarted","Data":"2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.693126 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f22f6239-6425-47b5-9e00-664ff50c02dc","Type":"ContainerStarted","Data":"e82512424873240bf26590486813a05426b9d2f00cd0c90b67c95f36e1b38e8f"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.695463 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-854d558954-hhsmv" 
event={"ID":"56b43419-bf45-4850-996c-276b31e090d3","Type":"ContainerStarted","Data":"d7339dfd5154e45d4ed2529b92f07e5b622da446626ba9631d7795666ba362cb"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.695594 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-854d558954-hhsmv" podUID="56b43419-bf45-4850-996c-276b31e090d3" containerName="heat-cfnapi" containerID="cri-o://d7339dfd5154e45d4ed2529b92f07e5b622da446626ba9631d7795666ba362cb" gracePeriod=60 Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.695900 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.700131 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78c78d7bb-4kb6m" podUID="3b6e5ec6-9488-4be9-852d-defd9556b4ca" containerName="heat-api" containerID="cri-o://bbfd992b3f603f3c361f4a632b631e37e3e5bc99c4a57394bab2f722fcd3dc5f" gracePeriod=60 Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.700325 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c78d7bb-4kb6m" event={"ID":"3b6e5ec6-9488-4be9-852d-defd9556b4ca","Type":"ContainerStarted","Data":"bbfd992b3f603f3c361f4a632b631e37e3e5bc99c4a57394bab2f722fcd3dc5f"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.700370 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.706737 4955 generic.go:334] "Generic (PLEG): container finished" podID="247c58e7-931d-4356-a900-da1c877548cd" containerID="a42783586119478a64e62468a02ed19265730f5baf42f352b95f6d02691375c5" exitCode=1 Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.706796 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8dd674b7b-kd8rg" event={"ID":"247c58e7-931d-4356-a900-da1c877548cd","Type":"ContainerDied","Data":"a42783586119478a64e62468a02ed19265730f5baf42f352b95f6d02691375c5"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.707484 4955 scope.go:117] "RemoveContainer" containerID="a42783586119478a64e62468a02ed19265730f5baf42f352b95f6d02691375c5" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.718343 4955 generic.go:334] "Generic (PLEG): container finished" podID="c6eff7b8-c700-48bd-b71b-0343fca61cc4" containerID="0cac018f6ddd7d5d41b52cb36a7aa0e9c4156c9f80cf94646cd6290efd35ed0e" exitCode=0 Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.734774 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-854d558954-hhsmv" podStartSLOduration=7.380268256 podStartE2EDuration="12.734756543s" podCreationTimestamp="2026-02-02 13:19:51 +0000 UTC" firstStartedPulling="2026-02-02 13:19:56.9043893 +0000 UTC m=+1047.816725760" lastFinishedPulling="2026-02-02 13:20:02.258877587 +0000 UTC m=+1053.171214047" observedRunningTime="2026-02-02 13:20:03.714904515 +0000 UTC m=+1054.627240955" watchObservedRunningTime="2026-02-02 13:20:03.734756543 +0000 UTC m=+1054.647092993" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.751111 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-afac-account-create-update-kpmqz" event={"ID":"c6eff7b8-c700-48bd-b71b-0343fca61cc4","Type":"ContainerDied","Data":"0cac018f6ddd7d5d41b52cb36a7aa0e9c4156c9f80cf94646cd6290efd35ed0e"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.751163 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" event={"ID":"f7babdce-fcbe-452c-ac21-041d0cebf985","Type":"ContainerStarted","Data":"7d692f51af9d339bf4ab11cc34766b85a2a8b9adb2324620a51cbb96afbf7b67"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.767877 4955 generic.go:334] "Generic (PLEG): container finished" podID="5634670e-87b2-4c94-a877-853ad21f32b2" containerID="8f6ae369df54f3c4ffbfbc36da4dff83b0810db8c65ab2b9fc500f0f6bfc193d" exitCode=1 Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.768423 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" event={"ID":"5634670e-87b2-4c94-a877-853ad21f32b2","Type":"ContainerDied","Data":"8f6ae369df54f3c4ffbfbc36da4dff83b0810db8c65ab2b9fc500f0f6bfc193d"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.769437 4955 scope.go:117] "RemoveContainer" containerID="8f6ae369df54f3c4ffbfbc36da4dff83b0810db8c65ab2b9fc500f0f6bfc193d" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.786624 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78c78d7bb-4kb6m" podStartSLOduration=7.434918804 podStartE2EDuration="12.786602744s" podCreationTimestamp="2026-02-02 13:19:51 +0000 UTC" firstStartedPulling="2026-02-02 13:19:56.901670074 +0000 UTC m=+1047.814006524" lastFinishedPulling="2026-02-02 13:20:02.253354014 +0000 UTC m=+1053.165690464" observedRunningTime="2026-02-02 13:20:03.767851952 +0000 UTC m=+1054.680188402" watchObservedRunningTime="2026-02-02 13:20:03.786602744 +0000 UTC m=+1054.698939194" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.789462 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59594c5dbd-8r97t" event={"ID":"8d242088-395a-4a37-abec-5a0a15e68d91","Type":"ContainerStarted","Data":"d87c8fbb777b2f763702118469b453c2c83523874c83a13a8e2335168d5c3a30"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.789500 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59594c5dbd-8r97t" event={"ID":"8d242088-395a-4a37-abec-5a0a15e68d91","Type":"ContainerStarted","Data":"879650fd0847982cee48d0ec17cf60104a902a803e30bdf2358e9a8e0b3d7e73"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.790075 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.791297 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9585f6f46-wl58s" event={"ID":"f8e3a0ef-22c1-4f58-bd10-175541f41d88","Type":"ContainerStarted","Data":"f873f866187dec5310ce65bf80fcb6861a9664a2a100304c34c0369682e0e835"} Feb 02 13:20:03 crc kubenswrapper[4955]: I0202 13:20:03.856681 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59594c5dbd-8r97t" podStartSLOduration=2.856653713 podStartE2EDuration="2.856653713s" podCreationTimestamp="2026-02-02 13:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:03.846892597 +0000 UTC m=+1054.759229047" watchObservedRunningTime="2026-02-02 13:20:03.856653713 +0000 UTC m=+1054.768990163" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.011961 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.025202 4955 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-79fb55657c-85sjk" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.614841 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.689306 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpqth\" (UniqueName: \"kubernetes.io/projected/8faf7286-4893-432d-bbe7-a431158357f9-kube-api-access-wpqth\") pod \"8faf7286-4893-432d-bbe7-a431158357f9\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.689488 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8faf7286-4893-432d-bbe7-a431158357f9-operator-scripts\") pod \"8faf7286-4893-432d-bbe7-a431158357f9\" (UID: \"8faf7286-4893-432d-bbe7-a431158357f9\") " Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.689998 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8faf7286-4893-432d-bbe7-a431158357f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8faf7286-4893-432d-bbe7-a431158357f9" (UID: "8faf7286-4893-432d-bbe7-a431158357f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.690779 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8faf7286-4893-432d-bbe7-a431158357f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.704448 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8faf7286-4893-432d-bbe7-a431158357f9-kube-api-access-wpqth" (OuterVolumeSpecName: "kube-api-access-wpqth") pod "8faf7286-4893-432d-bbe7-a431158357f9" (UID: "8faf7286-4893-432d-bbe7-a431158357f9"). InnerVolumeSpecName "kube-api-access-wpqth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.801337 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpqth\" (UniqueName: \"kubernetes.io/projected/8faf7286-4893-432d-bbe7-a431158357f9-kube-api-access-wpqth\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.852618 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"342df619-1ebd-498c-9199-5c48a35fb732","Type":"ContainerStarted","Data":"8e792c0ba64fcc36a277fcda7db418d6d8d5906e522733b2fe2e98172b984576"} Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.888013 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" event={"ID":"f7babdce-fcbe-452c-ac21-041d0cebf985","Type":"ContainerStarted","Data":"c86549bb8cb75749818f28bcbadc453eff1c1ba7387b517051dd4528d2409f3b"} Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.889192 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.897147 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.903861 4955 generic.go:334] "Generic (PLEG): container finished" podID="5634670e-87b2-4c94-a877-853ad21f32b2" containerID="725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356" exitCode=1 Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.903936 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" event={"ID":"5634670e-87b2-4c94-a877-853ad21f32b2","Type":"ContainerDied","Data":"725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356"} Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.903975 4955 scope.go:117] "RemoveContainer" containerID="8f6ae369df54f3c4ffbfbc36da4dff83b0810db8c65ab2b9fc500f0f6bfc193d" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.904484 4955 scope.go:117] "RemoveContainer" containerID="725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356" Feb 02 13:20:04 crc kubenswrapper[4955]: E0202 13:20:04.904748 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b4597c47b-whqbv_openstack(5634670e-87b2-4c94-a877-853ad21f32b2)\"" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.919475 4955 generic.go:334] "Generic (PLEG): container finished" podID="247c58e7-931d-4356-a900-da1c877548cd" containerID="abce91f3f51757305a9f4bd1e85e59522f6cb99b89ca9bcf0b57349db3502787" exitCode=1 Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.919614 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8dd674b7b-kd8rg" event={"ID":"247c58e7-931d-4356-a900-da1c877548cd","Type":"ContainerDied","Data":"abce91f3f51757305a9f4bd1e85e59522f6cb99b89ca9bcf0b57349db3502787"} Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.920088 4955 scope.go:117] "RemoveContainer" containerID="abce91f3f51757305a9f4bd1e85e59522f6cb99b89ca9bcf0b57349db3502787" Feb 02 13:20:04 crc kubenswrapper[4955]: E0202 13:20:04.920300 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8dd674b7b-kd8rg_openstack(247c58e7-931d-4356-a900-da1c877548cd)\"" pod="openstack/heat-api-8dd674b7b-kd8rg" podUID="247c58e7-931d-4356-a900-da1c877548cd" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.921817 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.929890 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" podStartSLOduration=3.9298709609999998 podStartE2EDuration="3.929870961s" podCreationTimestamp="2026-02-02 13:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:04.918988998 +0000 UTC m=+1055.831325448" watchObservedRunningTime="2026-02-02 13:20:04.929870961 +0000 UTC m=+1055.842207411" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.933841 4955 generic.go:334] "Generic (PLEG): container finished" podID="3b6e5ec6-9488-4be9-852d-defd9556b4ca" containerID="bbfd992b3f603f3c361f4a632b631e37e3e5bc99c4a57394bab2f722fcd3dc5f" exitCode=0 Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.933919 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c78d7bb-4kb6m" event={"ID":"3b6e5ec6-9488-4be9-852d-defd9556b4ca","Type":"ContainerDied","Data":"bbfd992b3f603f3c361f4a632b631e37e3e5bc99c4a57394bab2f722fcd3dc5f"} Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.986308 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" event={"ID":"8faf7286-4893-432d-bbe7-a431158357f9","Type":"ContainerDied","Data":"5fe77086617dc20778eff1bc3bfc8dd76da3ff98a476785d1b25ecc315c50d7c"} Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.986354 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe77086617dc20778eff1bc3bfc8dd76da3ff98a476785d1b25ecc315c50d7c" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.986428 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be3f-account-create-update-xvcsp" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.989881 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q76t2" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.993665 4955 scope.go:117] "RemoveContainer" containerID="a42783586119478a64e62468a02ed19265730f5baf42f352b95f6d02691375c5" Feb 02 13:20:04 crc kubenswrapper[4955]: I0202 13:20:04.994971 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.017426 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ea679e-a44a-4bd0-867e-044542b96bbb-operator-scripts\") pod \"98ea679e-a44a-4bd0-867e-044542b96bbb\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.017474 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5fb8423-7ae4-4515-8920-72d90de48d8e-operator-scripts\") pod \"b5fb8423-7ae4-4515-8920-72d90de48d8e\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.017497 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/98ea679e-a44a-4bd0-867e-044542b96bbb-kube-api-access-trq8c\") pod \"98ea679e-a44a-4bd0-867e-044542b96bbb\" (UID: \"98ea679e-a44a-4bd0-867e-044542b96bbb\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.017547 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf6qt\" (UniqueName: \"kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt\") pod \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.019038 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g6mg\" (UniqueName: \"kubernetes.io/projected/67459792-2667-4acf-9ce1-6b715ce15a98-kube-api-access-5g6mg\") pod \"67459792-2667-4acf-9ce1-6b715ce15a98\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.019096 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67459792-2667-4acf-9ce1-6b715ce15a98-operator-scripts\") pod \"67459792-2667-4acf-9ce1-6b715ce15a98\" (UID: \"67459792-2667-4acf-9ce1-6b715ce15a98\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.019131 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprfx\" (UniqueName: \"kubernetes.io/projected/b5fb8423-7ae4-4515-8920-72d90de48d8e-kube-api-access-zprfx\") pod \"b5fb8423-7ae4-4515-8920-72d90de48d8e\" (UID: \"b5fb8423-7ae4-4515-8920-72d90de48d8e\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.019203 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a17c62e-18bb-4a12-9865-8d38c0b7102f-operator-scripts\") pod \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\" (UID: \"2a17c62e-18bb-4a12-9865-8d38c0b7102f\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.017970 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5fb8423-7ae4-4515-8920-72d90de48d8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5fb8423-7ae4-4515-8920-72d90de48d8e" (UID: "b5fb8423-7ae4-4515-8920-72d90de48d8e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.018307 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ea679e-a44a-4bd0-867e-044542b96bbb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98ea679e-a44a-4bd0-867e-044542b96bbb" (UID: "98ea679e-a44a-4bd0-867e-044542b96bbb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.021622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67459792-2667-4acf-9ce1-6b715ce15a98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67459792-2667-4acf-9ce1-6b715ce15a98" (UID: "67459792-2667-4acf-9ce1-6b715ce15a98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.022134 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a17c62e-18bb-4a12-9865-8d38c0b7102f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a17c62e-18bb-4a12-9865-8d38c0b7102f" (UID: "2a17c62e-18bb-4a12-9865-8d38c0b7102f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.024014 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67459792-2667-4acf-9ce1-6b715ce15a98-kube-api-access-5g6mg" (OuterVolumeSpecName: "kube-api-access-5g6mg") pod "67459792-2667-4acf-9ce1-6b715ce15a98" (UID: "67459792-2667-4acf-9ce1-6b715ce15a98"). InnerVolumeSpecName "kube-api-access-5g6mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.024769 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f22f6239-6425-47b5-9e00-664ff50c02dc","Type":"ContainerStarted","Data":"ac0192ce0e255b61a88137c1f0d9fbc7a716dde787e6839345391072af9dc598"} Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.031255 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ea679e-a44a-4bd0-867e-044542b96bbb-kube-api-access-trq8c" (OuterVolumeSpecName: "kube-api-access-trq8c") pod "98ea679e-a44a-4bd0-867e-044542b96bbb" (UID: "98ea679e-a44a-4bd0-867e-044542b96bbb"). InnerVolumeSpecName "kube-api-access-trq8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.035103 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt" (OuterVolumeSpecName: "kube-api-access-bf6qt") pod "2a17c62e-18bb-4a12-9865-8d38c0b7102f" (UID: "2a17c62e-18bb-4a12-9865-8d38c0b7102f"). InnerVolumeSpecName "kube-api-access-bf6qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.035483 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fb8423-7ae4-4515-8920-72d90de48d8e-kube-api-access-zprfx" (OuterVolumeSpecName: "kube-api-access-zprfx") pod "b5fb8423-7ae4-4515-8920-72d90de48d8e" (UID: "b5fb8423-7ae4-4515-8920-72d90de48d8e"). InnerVolumeSpecName "kube-api-access-zprfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.053781 4955 generic.go:334] "Generic (PLEG): container finished" podID="56b43419-bf45-4850-996c-276b31e090d3" containerID="d7339dfd5154e45d4ed2529b92f07e5b622da446626ba9631d7795666ba362cb" exitCode=0 Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.053887 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-854d558954-hhsmv" event={"ID":"56b43419-bf45-4850-996c-276b31e090d3","Type":"ContainerDied","Data":"d7339dfd5154e45d4ed2529b92f07e5b622da446626ba9631d7795666ba362cb"} Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.098479 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.098461625 podStartE2EDuration="7.098461625s" podCreationTimestamp="2026-02-02 13:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:05.087533432 +0000 UTC m=+1055.999869882" watchObservedRunningTime="2026-02-02 13:20:05.098461625 +0000 UTC m=+1056.010798075" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.112849 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerStarted","Data":"5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb"} Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.112895 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerStarted","Data":"66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c"} Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125200 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67459792-2667-4acf-9ce1-6b715ce15a98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125231 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprfx\" (UniqueName: \"kubernetes.io/projected/b5fb8423-7ae4-4515-8920-72d90de48d8e-kube-api-access-zprfx\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125243 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a17c62e-18bb-4a12-9865-8d38c0b7102f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125254 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ea679e-a44a-4bd0-867e-044542b96bbb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125262 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5fb8423-7ae4-4515-8920-72d90de48d8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125271 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/98ea679e-a44a-4bd0-867e-044542b96bbb-kube-api-access-trq8c\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125279 4955 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-bf6qt\" (UniqueName: \"kubernetes.io/projected/2a17c62e-18bb-4a12-9865-8d38c0b7102f-kube-api-access-bf6qt\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.125286 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g6mg\" (UniqueName: \"kubernetes.io/projected/67459792-2667-4acf-9ce1-6b715ce15a98-kube-api-access-5g6mg\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.304224 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.314839 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.430980 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data-custom\") pod \"56b43419-bf45-4850-996c-276b31e090d3\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431111 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-combined-ca-bundle\") pod \"56b43419-bf45-4850-996c-276b31e090d3\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431183 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data\") pod \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431293 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhs9t\" (UniqueName: \"kubernetes.io/projected/3b6e5ec6-9488-4be9-852d-defd9556b4ca-kube-api-access-bhs9t\") pod \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431340 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data\") pod \"56b43419-bf45-4850-996c-276b31e090d3\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431360 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-combined-ca-bundle\") pod \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431470 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data-custom\") pod \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\" (UID: \"3b6e5ec6-9488-4be9-852d-defd9556b4ca\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.431501 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrdpv\" (UniqueName: 
\"kubernetes.io/projected/56b43419-bf45-4850-996c-276b31e090d3-kube-api-access-vrdpv\") pod \"56b43419-bf45-4850-996c-276b31e090d3\" (UID: \"56b43419-bf45-4850-996c-276b31e090d3\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.449693 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "56b43419-bf45-4850-996c-276b31e090d3" (UID: "56b43419-bf45-4850-996c-276b31e090d3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.450810 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3b6e5ec6-9488-4be9-852d-defd9556b4ca" (UID: "3b6e5ec6-9488-4be9-852d-defd9556b4ca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.452694 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6e5ec6-9488-4be9-852d-defd9556b4ca-kube-api-access-bhs9t" (OuterVolumeSpecName: "kube-api-access-bhs9t") pod "3b6e5ec6-9488-4be9-852d-defd9556b4ca" (UID: "3b6e5ec6-9488-4be9-852d-defd9556b4ca"). InnerVolumeSpecName "kube-api-access-bhs9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.460469 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b43419-bf45-4850-996c-276b31e090d3-kube-api-access-vrdpv" (OuterVolumeSpecName: "kube-api-access-vrdpv") pod "56b43419-bf45-4850-996c-276b31e090d3" (UID: "56b43419-bf45-4850-996c-276b31e090d3"). InnerVolumeSpecName "kube-api-access-vrdpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.486397 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b6e5ec6-9488-4be9-852d-defd9556b4ca" (UID: "3b6e5ec6-9488-4be9-852d-defd9556b4ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.503318 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data" (OuterVolumeSpecName: "config-data") pod "3b6e5ec6-9488-4be9-852d-defd9556b4ca" (UID: "3b6e5ec6-9488-4be9-852d-defd9556b4ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.520325 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56b43419-bf45-4850-996c-276b31e090d3" (UID: "56b43419-bf45-4850-996c-276b31e090d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536263 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536299 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536314 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrdpv\" (UniqueName: \"kubernetes.io/projected/56b43419-bf45-4850-996c-276b31e090d3-kube-api-access-vrdpv\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536327 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536337 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536347 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6e5ec6-9488-4be9-852d-defd9556b4ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.536355 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhs9t\" (UniqueName: \"kubernetes.io/projected/3b6e5ec6-9488-4be9-852d-defd9556b4ca-kube-api-access-bhs9t\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.567773 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data" (OuterVolumeSpecName: "config-data") pod "56b43419-bf45-4850-996c-276b31e090d3" (UID: "56b43419-bf45-4850-996c-276b31e090d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.637858 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43419-bf45-4850-996c-276b31e090d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.669371 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.840165 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6eff7b8-c700-48bd-b71b-0343fca61cc4-operator-scripts\") pod \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.840422 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4rvr\" (UniqueName: \"kubernetes.io/projected/c6eff7b8-c700-48bd-b71b-0343fca61cc4-kube-api-access-t4rvr\") pod \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\" (UID: \"c6eff7b8-c700-48bd-b71b-0343fca61cc4\") " Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.842259 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6eff7b8-c700-48bd-b71b-0343fca61cc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6eff7b8-c700-48bd-b71b-0343fca61cc4" (UID: "c6eff7b8-c700-48bd-b71b-0343fca61cc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.845855 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6eff7b8-c700-48bd-b71b-0343fca61cc4-kube-api-access-t4rvr" (OuterVolumeSpecName: "kube-api-access-t4rvr") pod "c6eff7b8-c700-48bd-b71b-0343fca61cc4" (UID: "c6eff7b8-c700-48bd-b71b-0343fca61cc4"). InnerVolumeSpecName "kube-api-access-t4rvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.943344 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4rvr\" (UniqueName: \"kubernetes.io/projected/c6eff7b8-c700-48bd-b71b-0343fca61cc4-kube-api-access-t4rvr\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:05 crc kubenswrapper[4955]: I0202 13:20:05.943380 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6eff7b8-c700-48bd-b71b-0343fca61cc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.121017 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q76t2" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.121019 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q76t2" event={"ID":"98ea679e-a44a-4bd0-867e-044542b96bbb","Type":"ContainerDied","Data":"bba1f6e1a4f2bde0c184195a2111b49457d3e31f630426da1b373afcc17a34aa"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.121136 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba1f6e1a4f2bde0c184195a2111b49457d3e31f630426da1b373afcc17a34aa" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.122247 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-854d558954-hhsmv" event={"ID":"56b43419-bf45-4850-996c-276b31e090d3","Type":"ContainerDied","Data":"097cb7adc22ef71ed162256e951abea0719c8c37397b37dd00fa9a5202129451"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.122267 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-854d558954-hhsmv" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.122305 4955 scope.go:117] "RemoveContainer" containerID="d7339dfd5154e45d4ed2529b92f07e5b622da446626ba9631d7795666ba362cb" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.123825 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"342df619-1ebd-498c-9199-5c48a35fb732","Type":"ContainerStarted","Data":"1214848d480e211a3432b5c18323dda020c7cb8455f8a7d90fca3de6a8149d4b"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.125855 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-afac-account-create-update-kpmqz" event={"ID":"c6eff7b8-c700-48bd-b71b-0343fca61cc4","Type":"ContainerDied","Data":"9f6f14e1702e4e1cf536a7344879819993d65181b7ae31d4ffae4d3947171e62"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.125883 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6f14e1702e4e1cf536a7344879819993d65181b7ae31d4ffae4d3947171e62" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.125922 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-afac-account-create-update-kpmqz" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.135740 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrzb9" event={"ID":"b5fb8423-7ae4-4515-8920-72d90de48d8e","Type":"ContainerDied","Data":"e9593fa50165c1745050277c7e2e3f5937278086d985bb029858f028acce189b"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.135779 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9593fa50165c1745050277c7e2e3f5937278086d985bb029858f028acce189b" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.135847 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mrzb9" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.151078 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.151052696 podStartE2EDuration="8.151052696s" podCreationTimestamp="2026-02-02 13:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:06.141923505 +0000 UTC m=+1057.054259955" watchObservedRunningTime="2026-02-02 13:20:06.151052696 +0000 UTC m=+1057.063389146" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.156524 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ee-account-create-update-7nl28" event={"ID":"67459792-2667-4acf-9ce1-6b715ce15a98","Type":"ContainerDied","Data":"d589b6ddf71bd6ab03216e9219dcce37331fff24bd5d7c0cfa0d7b3b48404acf"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.156580 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d589b6ddf71bd6ab03216e9219dcce37331fff24bd5d7c0cfa0d7b3b48404acf" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.156709 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c3ee-account-create-update-7nl28" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.169750 4955 scope.go:117] "RemoveContainer" containerID="725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.170036 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-75hh6" event={"ID":"2a17c62e-18bb-4a12-9865-8d38c0b7102f","Type":"ContainerDied","Data":"aa6503d311939a1619cad6338405cac1e44daa5d208dfd67560cb0732d7c0336"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.170091 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa6503d311939a1619cad6338405cac1e44daa5d208dfd67560cb0732d7c0336" Feb 02 13:20:06 crc kubenswrapper[4955]: E0202 13:20:06.170095 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b4597c47b-whqbv_openstack(5634670e-87b2-4c94-a877-853ad21f32b2)\"" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.170211 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-75hh6" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.187927 4955 scope.go:117] "RemoveContainer" containerID="abce91f3f51757305a9f4bd1e85e59522f6cb99b89ca9bcf0b57349db3502787" Feb 02 13:20:06 crc kubenswrapper[4955]: E0202 13:20:06.189175 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8dd674b7b-kd8rg_openstack(247c58e7-931d-4356-a900-da1c877548cd)\"" pod="openstack/heat-api-8dd674b7b-kd8rg" podUID="247c58e7-931d-4356-a900-da1c877548cd" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.190753 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c78d7bb-4kb6m" event={"ID":"3b6e5ec6-9488-4be9-852d-defd9556b4ca","Type":"ContainerDied","Data":"f364badde873b340901f5e00c87d7939b208e8d2c9630d64852e64727c7e851a"} Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.191352 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78c78d7bb-4kb6m" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.217171 4955 scope.go:117] "RemoveContainer" containerID="bbfd992b3f603f3c361f4a632b631e37e3e5bc99c4a57394bab2f722fcd3dc5f" Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.286608 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78c78d7bb-4kb6m"] Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.301353 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78c78d7bb-4kb6m"] Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.309431 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-854d558954-hhsmv"] Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.317416 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-854d558954-hhsmv"] Feb 02 13:20:06 crc kubenswrapper[4955]: I0202 13:20:06.921737 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.014731 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-trw5r"] Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.014963 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="dnsmasq-dns" containerID="cri-o://e4b32a17a6f8095dc3b112a5c1b7a3419f71a9788828ead6b4227f526e647c32" gracePeriod=10 Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.231079 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c56868b-3c42-436b-aa99-89edb4701754" containerID="e4b32a17a6f8095dc3b112a5c1b7a3419f71a9788828ead6b4227f526e647c32" exitCode=0 Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.233161 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" event={"ID":"6c56868b-3c42-436b-aa99-89edb4701754","Type":"ContainerDied","Data":"e4b32a17a6f8095dc3b112a5c1b7a3419f71a9788828ead6b4227f526e647c32"} Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.616725 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.714988 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc57c\" (UniqueName: \"kubernetes.io/projected/6c56868b-3c42-436b-aa99-89edb4701754-kube-api-access-fc57c\") pod \"6c56868b-3c42-436b-aa99-89edb4701754\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.715927 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-swift-storage-0\") pod \"6c56868b-3c42-436b-aa99-89edb4701754\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.715993 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-config\") pod \"6c56868b-3c42-436b-aa99-89edb4701754\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.716099 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-nb\") pod \"6c56868b-3c42-436b-aa99-89edb4701754\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.716129 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-svc\") pod \"6c56868b-3c42-436b-aa99-89edb4701754\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.716150 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-sb\") pod \"6c56868b-3c42-436b-aa99-89edb4701754\" (UID: \"6c56868b-3c42-436b-aa99-89edb4701754\") " Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.727767 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c56868b-3c42-436b-aa99-89edb4701754-kube-api-access-fc57c" (OuterVolumeSpecName: "kube-api-access-fc57c") pod "6c56868b-3c42-436b-aa99-89edb4701754" (UID: "6c56868b-3c42-436b-aa99-89edb4701754"). InnerVolumeSpecName "kube-api-access-fc57c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.736305 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b6e5ec6-9488-4be9-852d-defd9556b4ca" path="/var/lib/kubelet/pods/3b6e5ec6-9488-4be9-852d-defd9556b4ca/volumes" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.736849 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b43419-bf45-4850-996c-276b31e090d3" path="/var/lib/kubelet/pods/56b43419-bf45-4850-996c-276b31e090d3/volumes" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.782423 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c56868b-3c42-436b-aa99-89edb4701754" (UID: "6c56868b-3c42-436b-aa99-89edb4701754"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.789854 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-config" (OuterVolumeSpecName: "config") pod "6c56868b-3c42-436b-aa99-89edb4701754" (UID: "6c56868b-3c42-436b-aa99-89edb4701754"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.795734 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c56868b-3c42-436b-aa99-89edb4701754" (UID: "6c56868b-3c42-436b-aa99-89edb4701754"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.815836 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c56868b-3c42-436b-aa99-89edb4701754" (UID: "6c56868b-3c42-436b-aa99-89edb4701754"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.819864 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.820098 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.820114 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc57c\" (UniqueName: \"kubernetes.io/projected/6c56868b-3c42-436b-aa99-89edb4701754-kube-api-access-fc57c\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.820128 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.820141 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.830936 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c56868b-3c42-436b-aa99-89edb4701754" (UID: "6c56868b-3c42-436b-aa99-89edb4701754"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.921369 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c56868b-3c42-436b-aa99-89edb4701754-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937180 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8vmrp"] Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937575 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6eff7b8-c700-48bd-b71b-0343fca61cc4" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937586 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6eff7b8-c700-48bd-b71b-0343fca61cc4" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937595 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5fb8423-7ae4-4515-8920-72d90de48d8e" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937600 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5fb8423-7ae4-4515-8920-72d90de48d8e" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937620 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="init" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937626 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="init" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937636 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ea679e-a44a-4bd0-867e-044542b96bbb" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937642 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ea679e-a44a-4bd0-867e-044542b96bbb" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937653 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67459792-2667-4acf-9ce1-6b715ce15a98" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937660 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="67459792-2667-4acf-9ce1-6b715ce15a98" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937670 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6e5ec6-9488-4be9-852d-defd9556b4ca" containerName="heat-api" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937676 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6e5ec6-9488-4be9-852d-defd9556b4ca" containerName="heat-api" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937687 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a17c62e-18bb-4a12-9865-8d38c0b7102f" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937693 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a17c62e-18bb-4a12-9865-8d38c0b7102f" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937713 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8faf7286-4893-432d-bbe7-a431158357f9" 
containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937719 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8faf7286-4893-432d-bbe7-a431158357f9" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937733 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b43419-bf45-4850-996c-276b31e090d3" containerName="heat-cfnapi" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937738 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b43419-bf45-4850-996c-276b31e090d3" containerName="heat-cfnapi" Feb 02 13:20:07 crc kubenswrapper[4955]: E0202 13:20:07.937750 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="dnsmasq-dns" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937755 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="dnsmasq-dns" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937918 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5fb8423-7ae4-4515-8920-72d90de48d8e" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937934 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ea679e-a44a-4bd0-867e-044542b96bbb" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937975 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8faf7286-4893-432d-bbe7-a431158357f9" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.937986 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a17c62e-18bb-4a12-9865-8d38c0b7102f" containerName="mariadb-database-create" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.938001 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="67459792-2667-4acf-9ce1-6b715ce15a98" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.938012 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6e5ec6-9488-4be9-852d-defd9556b4ca" containerName="heat-api" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.938023 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="dnsmasq-dns" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.938034 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6eff7b8-c700-48bd-b71b-0343fca61cc4" containerName="mariadb-account-create-update" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.938043 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b43419-bf45-4850-996c-276b31e090d3" containerName="heat-cfnapi" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.938621 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.942774 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.942795 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zshhj" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.943064 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 13:20:07 crc kubenswrapper[4955]: I0202 13:20:07.975649 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8vmrp"] Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.023986 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdwnq\" (UniqueName: \"kubernetes.io/projected/72065269-3b09-46d2-a98d-00f4f38d40a1-kube-api-access-xdwnq\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.024045 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-config-data\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.024224 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.024271 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-scripts\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.125922 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdwnq\" (UniqueName: \"kubernetes.io/projected/72065269-3b09-46d2-a98d-00f4f38d40a1-kube-api-access-xdwnq\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.125981 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-config-data\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.126050 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8vmrp\" 
(UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.126239 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-scripts\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.129979 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.130594 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-scripts\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.131486 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-config-data\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.151769 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdwnq\" (UniqueName: \"kubernetes.io/projected/72065269-3b09-46d2-a98d-00f4f38d40a1-kube-api-access-xdwnq\") pod \"nova-cell0-conductor-db-sync-8vmrp\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.256934 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.304216 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" event={"ID":"6c56868b-3c42-436b-aa99-89edb4701754","Type":"ContainerDied","Data":"e72ff40a9ae50f0bf4c57e35f7e6369c0621f7c8c6c4a8edd5cee71a94c3d650"} Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.304617 4955 scope.go:117] "RemoveContainer" containerID="e4b32a17a6f8095dc3b112a5c1b7a3419f71a9788828ead6b4227f526e647c32" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.304870 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.314305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerStarted","Data":"594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0"} Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.314524 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-central-agent" containerID="cri-o://2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136" gracePeriod=30 Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.314771 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.315343 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="sg-core" containerID="cri-o://5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb" gracePeriod=30 Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.315457 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="proxy-httpd" containerID="cri-o://594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0" gracePeriod=30 Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.315399 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-notification-agent" containerID="cri-o://66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c" gracePeriod=30 Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.356756 4955 scope.go:117] "RemoveContainer" containerID="ee558c3f627276eb2b293236ce8ecad441d2f566a713f4513332f41536ff8516" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.386032 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.528587526 podStartE2EDuration="11.386009385s" podCreationTimestamp="2026-02-02 13:19:57 +0000 UTC" firstStartedPulling="2026-02-02 13:19:59.132649227 +0000 UTC m=+1050.044985677" lastFinishedPulling="2026-02-02 13:20:06.990071086 +0000 UTC m=+1057.902407536" observedRunningTime="2026-02-02 13:20:08.385219215 +0000 UTC m=+1059.297555665" watchObservedRunningTime="2026-02-02 13:20:08.386009385 +0000 UTC m=+1059.298345825" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.464629 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-trw5r"] Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.512994 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-trw5r"] Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.551725 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.552113 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.552938 4955 scope.go:117] "RemoveContainer" 
containerID="abce91f3f51757305a9f4bd1e85e59522f6cb99b89ca9bcf0b57349db3502787" Feb 02 13:20:08 crc kubenswrapper[4955]: E0202 13:20:08.553276 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8dd674b7b-kd8rg_openstack(247c58e7-931d-4356-a900-da1c877548cd)\"" pod="openstack/heat-api-8dd674b7b-kd8rg" podUID="247c58e7-931d-4356-a900-da1c877548cd" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.576178 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.576671 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.576859 4955 scope.go:117] "RemoveContainer" containerID="725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356" Feb 02 13:20:08 crc kubenswrapper[4955]: E0202 13:20:08.577114 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b4597c47b-whqbv_openstack(5634670e-87b2-4c94-a877-853ad21f32b2)\"" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.846989 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.847051 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.889524 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.902045 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:20:08 crc kubenswrapper[4955]: I0202 13:20:08.904087 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8vmrp"] Feb 02 13:20:08 crc kubenswrapper[4955]: W0202 13:20:08.908236 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72065269_3b09_46d2_a98d_00f4f38d40a1.slice/crio-7fea2abc6b63e2474929975ff3ce3bd81ae3527860a2d74f81d974c49fad2a15 WatchSource:0}: Error finding container 7fea2abc6b63e2474929975ff3ce3bd81ae3527860a2d74f81d974c49fad2a15: Status 404 returned error can't find the container with id 7fea2abc6b63e2474929975ff3ce3bd81ae3527860a2d74f81d974c49fad2a15 Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.326201 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" event={"ID":"72065269-3b09-46d2-a98d-00f4f38d40a1","Type":"ContainerStarted","Data":"7fea2abc6b63e2474929975ff3ce3bd81ae3527860a2d74f81d974c49fad2a15"} Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.329097 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.329155 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.333444 4955 generic.go:334] "Generic (PLEG): container finished" podID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerID="594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0" exitCode=0 Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.333475 4955 generic.go:334] "Generic (PLEG): container finished" podID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerID="5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb" exitCode=2 Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.333482 4955 generic.go:334] "Generic (PLEG): container finished" podID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerID="66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c" exitCode=0 Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.333523 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerDied","Data":"594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0"} Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.333576 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerDied","Data":"5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb"} Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.333592 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerDied","Data":"66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c"} Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.334067 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.334089 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.334524 4955 scope.go:117] "RemoveContainer" containerID="725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356" Feb 02 13:20:09 crc kubenswrapper[4955]: E0202 13:20:09.334790 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b4597c47b-whqbv_openstack(5634670e-87b2-4c94-a877-853ad21f32b2)\"" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.365411 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.375833 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:09 crc kubenswrapper[4955]: I0202 13:20:09.747211 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c56868b-3c42-436b-aa99-89edb4701754" path="/var/lib/kubelet/pods/6c56868b-3c42-436b-aa99-89edb4701754/volumes" Feb 02 13:20:10 crc kubenswrapper[4955]: I0202 13:20:10.355059 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:10 crc kubenswrapper[4955]: I0202 13:20:10.355093 4955 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:11 crc kubenswrapper[4955]: I0202 13:20:11.585322 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:20:11 crc kubenswrapper[4955]: I0202 13:20:11.590851 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:20:11 crc kubenswrapper[4955]: I0202 13:20:11.838398 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.143830 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274212 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-log-httpd\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274293 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff24f\" (UniqueName: \"kubernetes.io/projected/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-kube-api-access-ff24f\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274381 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-sg-core-conf-yaml\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274437 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-combined-ca-bundle\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274457 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-config-data\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274650 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-run-httpd\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.274676 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-scripts\") pod \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\" (UID: \"587dcc9e-137d-4dd7-9a87-024e6d09b1e1\") " Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.275584 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: 
"587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.275944 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: "587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.315257 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-kube-api-access-ff24f" (OuterVolumeSpecName: "kube-api-access-ff24f") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: "587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "kube-api-access-ff24f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.318016 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-scripts" (OuterVolumeSpecName: "scripts") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: "587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.320935 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-trw5r" podUID="6c56868b-3c42-436b-aa99-89edb4701754" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.366918 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: "587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.380366 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.380702 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.380784 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.380856 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.380930 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff24f\" (UniqueName: \"kubernetes.io/projected/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-kube-api-access-ff24f\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.401900 4955 generic.go:334] "Generic (PLEG): container finished" podID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerID="2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136" exitCode=0 Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.403326 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.403713 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerDied","Data":"2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136"} Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.403780 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587dcc9e-137d-4dd7-9a87-024e6d09b1e1","Type":"ContainerDied","Data":"4bc7c2809b10893b72efa770d01208306e8fc54261efdd8dac09e8ba90c16b41"} Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.403804 4955 scope.go:117] "RemoveContainer" containerID="594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.404062 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.404099 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.414697 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: "587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.461694 4955 scope.go:117] "RemoveContainer" containerID="5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.482572 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.497869 4955 scope.go:117] "RemoveContainer" containerID="66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.526484 4955 scope.go:117] "RemoveContainer" containerID="2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.530334 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-config-data" (OuterVolumeSpecName: "config-data") pod "587dcc9e-137d-4dd7-9a87-024e6d09b1e1" (UID: "587dcc9e-137d-4dd7-9a87-024e6d09b1e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.561832 4955 scope.go:117] "RemoveContainer" containerID="594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.563997 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0\": container with ID starting with 594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0 not found: ID does not exist" containerID="594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.564047 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0"} err="failed to get container status \"594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0\": rpc error: code = NotFound desc = could not find container \"594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0\": container with ID starting with 594769e9d9fbb48be51116e4ac6a7d051719dfd72133ab1d5b97d49073856ae0 not found: ID does not exist" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.564081 4955 scope.go:117] "RemoveContainer" containerID="5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.565134 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb\": container with ID starting with 5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb not found: ID does not exist" containerID="5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.565189 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb"} err="failed to get container status \"5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb\": rpc error: code = NotFound desc = could not 
find container \"5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb\": container with ID starting with 5d867ecf4bf9af8c5f7b065eaef6d3d43eec9bcf0679f70e2ab8f6646c30abdb not found: ID does not exist" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.565225 4955 scope.go:117] "RemoveContainer" containerID="66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.566070 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c\": container with ID starting with 66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c not found: ID does not exist" containerID="66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.566189 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c"} err="failed to get container status \"66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c\": rpc error: code = NotFound desc = could not find container \"66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c\": container with ID starting with 66b2ef6027d020f3a4c0868f677be03821d6fa81483c5f7060d390a8c90c398c not found: ID does not exist" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.566286 4955 scope.go:117] "RemoveContainer" containerID="2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.568901 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136\": container with ID starting with 2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136 not found: ID does not exist" containerID="2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.568937 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136"} err="failed to get container status \"2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136\": rpc error: code = NotFound desc = could not find container \"2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136\": container with ID starting with 2a3acac6e42104ce09516cfd11130e1d0a173b157625db8902fe1784bacb3136 not found: ID does not exist" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.584690 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587dcc9e-137d-4dd7-9a87-024e6d09b1e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.771894 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.794963 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.812674 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.813076 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-central-agent" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813092 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-central-agent" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.813102 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-notification-agent" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813110 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-notification-agent" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.813138 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="sg-core" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813146 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="sg-core" Feb 02 13:20:12 crc kubenswrapper[4955]: E0202 13:20:12.813162 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="proxy-httpd" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813168 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="proxy-httpd" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813442 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="sg-core" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813457 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-notification-agent" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813473 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="ceilometer-central-agent" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.813483 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" containerName="proxy-httpd" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.825298 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.825404 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.828325 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.828509 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.890174 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-config-data\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.890910 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-log-httpd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.891000 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.891067 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-scripts\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.891126 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smcrd\" (UniqueName: \"kubernetes.io/projected/d60f9606-d4b3-4191-9966-53e71096871c-kube-api-access-smcrd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.891224 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.891380 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-run-httpd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993473 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-run-httpd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993584 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-config-data\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-log-httpd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993701 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993740 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-scripts\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993779 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smcrd\" (UniqueName: \"kubernetes.io/projected/d60f9606-d4b3-4191-9966-53e71096871c-kube-api-access-smcrd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.993854 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.994160 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-log-httpd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:12 crc kubenswrapper[4955]: I0202 13:20:12.994916 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-run-httpd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.007663 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.011439 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-scripts\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.014904 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.021467 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smcrd\" (UniqueName: \"kubernetes.io/projected/d60f9606-d4b3-4191-9966-53e71096871c-kube-api-access-smcrd\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.021802 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-config-data\") pod \"ceilometer-0\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.158071 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.323267 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.333458 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.707027 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-59594c5dbd-8r97t" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.708143 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-77c488fc4f-5xxjg" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.749901 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587dcc9e-137d-4dd7-9a87-024e6d09b1e1" path="/var/lib/kubelet/pods/587dcc9e-137d-4dd7-9a87-024e6d09b1e1/volumes" Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.835728 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8dd674b7b-kd8rg"] Feb 02 13:20:13 crc kubenswrapper[4955]: I0202 13:20:13.872488 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5b4597c47b-whqbv"] Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.444894 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.446622 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.508371 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8dd674b7b-kd8rg" event={"ID":"247c58e7-931d-4356-a900-da1c877548cd","Type":"ContainerDied","Data":"ec95f2640256f585f08389cfd2a0b516bdec0d2135fedc2c9e62767746afdb54"} Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.508803 4955 scope.go:117] "RemoveContainer" containerID="abce91f3f51757305a9f4bd1e85e59522f6cb99b89ca9bcf0b57349db3502787" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.509005 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8dd674b7b-kd8rg" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.517748 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.518242 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b4597c47b-whqbv" event={"ID":"5634670e-87b2-4c94-a877-853ad21f32b2","Type":"ContainerDied","Data":"c2e14183e82e05f4ec2d9450493dac18f3e2ccd3ea2f9fb18127ff1d36930ef9"} Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.544897 4955 scope.go:117] "RemoveContainer" containerID="725a9eb80f6bef0f30854f3da20591d4448d23a1428e611902dcd3573f9d6356" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561338 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-combined-ca-bundle\") pod \"5634670e-87b2-4c94-a877-853ad21f32b2\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561402 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srrgm\" (UniqueName: \"kubernetes.io/projected/5634670e-87b2-4c94-a877-853ad21f32b2-kube-api-access-srrgm\") pod \"5634670e-87b2-4c94-a877-853ad21f32b2\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561488 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-combined-ca-bundle\") pod \"247c58e7-931d-4356-a900-da1c877548cd\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561576 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data-custom\") pod \"247c58e7-931d-4356-a900-da1c877548cd\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561622 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data-custom\") pod \"5634670e-87b2-4c94-a877-853ad21f32b2\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561678 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf5jl\" (UniqueName: \"kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl\") pod \"247c58e7-931d-4356-a900-da1c877548cd\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561706 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data\") pod \"5634670e-87b2-4c94-a877-853ad21f32b2\" (UID: \"5634670e-87b2-4c94-a877-853ad21f32b2\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.561731 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data\") pod \"247c58e7-931d-4356-a900-da1c877548cd\" (UID: \"247c58e7-931d-4356-a900-da1c877548cd\") " Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.569739 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "247c58e7-931d-4356-a900-da1c877548cd" (UID: "247c58e7-931d-4356-a900-da1c877548cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.571984 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl" (OuterVolumeSpecName: "kube-api-access-gf5jl") pod "247c58e7-931d-4356-a900-da1c877548cd" (UID: "247c58e7-931d-4356-a900-da1c877548cd"). InnerVolumeSpecName "kube-api-access-gf5jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.572770 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5634670e-87b2-4c94-a877-853ad21f32b2-kube-api-access-srrgm" (OuterVolumeSpecName: "kube-api-access-srrgm") pod "5634670e-87b2-4c94-a877-853ad21f32b2" (UID: "5634670e-87b2-4c94-a877-853ad21f32b2"). InnerVolumeSpecName "kube-api-access-srrgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.573172 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5634670e-87b2-4c94-a877-853ad21f32b2" (UID: "5634670e-87b2-4c94-a877-853ad21f32b2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.616196 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "247c58e7-931d-4356-a900-da1c877548cd" (UID: "247c58e7-931d-4356-a900-da1c877548cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.626877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5634670e-87b2-4c94-a877-853ad21f32b2" (UID: "5634670e-87b2-4c94-a877-853ad21f32b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.642835 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data" (OuterVolumeSpecName: "config-data") pod "247c58e7-931d-4356-a900-da1c877548cd" (UID: "247c58e7-931d-4356-a900-da1c877548cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.649059 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data" (OuterVolumeSpecName: "config-data") pod "5634670e-87b2-4c94-a877-853ad21f32b2" (UID: "5634670e-87b2-4c94-a877-853ad21f32b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664721 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664750 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664762 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf5jl\" (UniqueName: \"kubernetes.io/projected/247c58e7-931d-4356-a900-da1c877548cd-kube-api-access-gf5jl\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664772 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664781 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664789 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5634670e-87b2-4c94-a877-853ad21f32b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664797 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srrgm\" (UniqueName: \"kubernetes.io/projected/5634670e-87b2-4c94-a877-853ad21f32b2-kube-api-access-srrgm\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.664806 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247c58e7-931d-4356-a900-da1c877548cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.795122 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.846773 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8dd674b7b-kd8rg"] Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.859246 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8dd674b7b-kd8rg"] Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.871213 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5b4597c47b-whqbv"] Feb 02 13:20:14 crc kubenswrapper[4955]: I0202 13:20:14.881220 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5b4597c47b-whqbv"] Feb 02 13:20:15 crc kubenswrapper[4955]: I0202 13:20:15.729365 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247c58e7-931d-4356-a900-da1c877548cd" path="/var/lib/kubelet/pods/247c58e7-931d-4356-a900-da1c877548cd/volumes" Feb 02 13:20:15 crc kubenswrapper[4955]: I0202 13:20:15.730387 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" path="/var/lib/kubelet/pods/5634670e-87b2-4c94-a877-853ad21f32b2/volumes" Feb 02 13:20:16 crc 
kubenswrapper[4955]: I0202 13:20:16.653767 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:18 crc kubenswrapper[4955]: I0202 13:20:18.556776 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-9585f6f46-wl58s" Feb 02 13:20:18 crc kubenswrapper[4955]: I0202 13:20:18.613650 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6d8c5fddf-xssbs"] Feb 02 13:20:18 crc kubenswrapper[4955]: I0202 13:20:18.614287 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6d8c5fddf-xssbs" podUID="e4433372-00c8-4e01-8813-4fed0ea54158" containerName="heat-engine" containerID="cri-o://deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" gracePeriod=60 Feb 02 13:20:21 crc kubenswrapper[4955]: E0202 13:20:21.787818 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:20:21 crc kubenswrapper[4955]: E0202 13:20:21.789523 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:20:21 crc kubenswrapper[4955]: E0202 13:20:21.799942 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:20:21 crc kubenswrapper[4955]: E0202 13:20:21.799993 4955 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6d8c5fddf-xssbs" podUID="e4433372-00c8-4e01-8813-4fed0ea54158" containerName="heat-engine" Feb 02 13:20:24 crc kubenswrapper[4955]: W0202 13:20:24.122158 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60f9606_d4b3_4191_9966_53e71096871c.slice/crio-4c842f90362733d0533e70ab829fe6e310a4762f560d304c4ac6ce561ce7e3e1 WatchSource:0}: Error finding container 4c842f90362733d0533e70ab829fe6e310a4762f560d304c4ac6ce561ce7e3e1: Status 404 returned error can't find the container with id 4c842f90362733d0533e70ab829fe6e310a4762f560d304c4ac6ce561ce7e3e1 Feb 02 13:20:24 crc kubenswrapper[4955]: E0202 13:20:24.151282 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 02 13:20:24 crc kubenswrapper[4955]: E0202 13:20:24.151446 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdwnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-8vmrp_openstack(72065269-3b09-46d2-a98d-00f4f38d40a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:24 crc kubenswrapper[4955]: E0202 13:20:24.152636 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" podUID="72065269-3b09-46d2-a98d-00f4f38d40a1" Feb 02 13:20:24 crc kubenswrapper[4955]: I0202 13:20:24.622925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerStarted","Data":"4c842f90362733d0533e70ab829fe6e310a4762f560d304c4ac6ce561ce7e3e1"} Feb 02 13:20:24 crc kubenswrapper[4955]: E0202 13:20:24.631190 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" podUID="72065269-3b09-46d2-a98d-00f4f38d40a1" Feb 02 13:20:25 crc kubenswrapper[4955]: I0202 13:20:25.632281 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerStarted","Data":"733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df"} Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.110541 4955 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.212534 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data-custom\") pod \"e4433372-00c8-4e01-8813-4fed0ea54158\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.212661 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-combined-ca-bundle\") pod \"e4433372-00c8-4e01-8813-4fed0ea54158\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.212744 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6w5r\" (UniqueName: \"kubernetes.io/projected/e4433372-00c8-4e01-8813-4fed0ea54158-kube-api-access-q6w5r\") pod \"e4433372-00c8-4e01-8813-4fed0ea54158\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.212816 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data\") pod \"e4433372-00c8-4e01-8813-4fed0ea54158\" (UID: \"e4433372-00c8-4e01-8813-4fed0ea54158\") " Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.219461 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4433372-00c8-4e01-8813-4fed0ea54158-kube-api-access-q6w5r" (OuterVolumeSpecName: "kube-api-access-q6w5r") pod "e4433372-00c8-4e01-8813-4fed0ea54158" (UID: "e4433372-00c8-4e01-8813-4fed0ea54158"). InnerVolumeSpecName "kube-api-access-q6w5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.220299 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e4433372-00c8-4e01-8813-4fed0ea54158" (UID: "e4433372-00c8-4e01-8813-4fed0ea54158"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.245421 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4433372-00c8-4e01-8813-4fed0ea54158" (UID: "e4433372-00c8-4e01-8813-4fed0ea54158"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.261731 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data" (OuterVolumeSpecName: "config-data") pod "e4433372-00c8-4e01-8813-4fed0ea54158" (UID: "e4433372-00c8-4e01-8813-4fed0ea54158"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.315151 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.315197 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.315210 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4433372-00c8-4e01-8813-4fed0ea54158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.315223 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6w5r\" (UniqueName: \"kubernetes.io/projected/e4433372-00c8-4e01-8813-4fed0ea54158-kube-api-access-q6w5r\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.645091 4955 generic.go:334] "Generic (PLEG): container finished" podID="e4433372-00c8-4e01-8813-4fed0ea54158" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" exitCode=0 Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.645138 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d8c5fddf-xssbs" event={"ID":"e4433372-00c8-4e01-8813-4fed0ea54158","Type":"ContainerDied","Data":"deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab"} Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.645414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d8c5fddf-xssbs" event={"ID":"e4433372-00c8-4e01-8813-4fed0ea54158","Type":"ContainerDied","Data":"8ee1daeb525406605ae2afbfe7d16018dadf517e30c20f20901b9280f4932f0f"} Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.645435 4955 scope.go:117] "RemoveContainer" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.645203 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6d8c5fddf-xssbs" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.650904 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerStarted","Data":"44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25"} Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.650939 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerStarted","Data":"8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4"} Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.684973 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6d8c5fddf-xssbs"] Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.686438 4955 scope.go:117] "RemoveContainer" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" Feb 02 13:20:26 crc kubenswrapper[4955]: E0202 13:20:26.687020 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab\": container with ID starting with deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab not found: ID does not exist" containerID="deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.687073 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab"} err="failed to get container status \"deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab\": rpc error: code = NotFound desc = could not find container \"deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab\": container with ID starting with deff51c1dfe6f36833195bc2ecc18691189dcd83c0edf560baaa7a9aff6f58ab not found: ID does not exist" Feb 02 13:20:26 crc kubenswrapper[4955]: I0202 13:20:26.697225 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6d8c5fddf-xssbs"] Feb 02 13:20:27 crc kubenswrapper[4955]: I0202 13:20:27.763814 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4433372-00c8-4e01-8813-4fed0ea54158" path="/var/lib/kubelet/pods/e4433372-00c8-4e01-8813-4fed0ea54158/volumes" Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.689132 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerStarted","Data":"f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54"} Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.689717 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.689593 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="proxy-httpd" containerID="cri-o://f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54" gracePeriod=30 Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.689611 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="sg-core" 
containerID="cri-o://44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25" gracePeriod=30 Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.689627 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-notification-agent" containerID="cri-o://8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4" gracePeriod=30 Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.689261 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-central-agent" containerID="cri-o://733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df" gracePeriod=30 Feb 02 13:20:29 crc kubenswrapper[4955]: I0202 13:20:29.712258 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.273153574 podStartE2EDuration="17.71224086s" podCreationTimestamp="2026-02-02 13:20:12 +0000 UTC" firstStartedPulling="2026-02-02 13:20:24.124454816 +0000 UTC m=+1075.036791266" lastFinishedPulling="2026-02-02 13:20:28.563542102 +0000 UTC m=+1079.475878552" observedRunningTime="2026-02-02 13:20:29.711276697 +0000 UTC m=+1080.623613237" watchObservedRunningTime="2026-02-02 13:20:29.71224086 +0000 UTC m=+1080.624577310" Feb 02 13:20:30 crc kubenswrapper[4955]: I0202 13:20:30.700958 4955 generic.go:334] "Generic (PLEG): container finished" podID="d60f9606-d4b3-4191-9966-53e71096871c" containerID="f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54" exitCode=0 Feb 02 13:20:30 crc kubenswrapper[4955]: I0202 13:20:30.701438 4955 generic.go:334] "Generic (PLEG): container finished" podID="d60f9606-d4b3-4191-9966-53e71096871c" containerID="44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25" exitCode=2 Feb 02 13:20:30 crc kubenswrapper[4955]: I0202 13:20:30.701447 4955 generic.go:334] "Generic (PLEG): container finished" podID="d60f9606-d4b3-4191-9966-53e71096871c" containerID="8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4" exitCode=0 Feb 02 13:20:30 crc kubenswrapper[4955]: I0202 13:20:30.701041 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerDied","Data":"f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54"} Feb 02 13:20:30 crc kubenswrapper[4955]: I0202 13:20:30.701482 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerDied","Data":"44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25"} Feb 02 13:20:30 crc kubenswrapper[4955]: I0202 13:20:30.701497 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerDied","Data":"8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4"} Feb 02 13:20:33 crc kubenswrapper[4955]: I0202 13:20:33.016932 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:20:33 crc kubenswrapper[4955]: I0202 13:20:33.017267 4955 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.546912 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.674436 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-log-httpd\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.674661 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smcrd\" (UniqueName: \"kubernetes.io/projected/d60f9606-d4b3-4191-9966-53e71096871c-kube-api-access-smcrd\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.674829 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-sg-core-conf-yaml\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.674980 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-config-data\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.675070 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-combined-ca-bundle\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.675082 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.675122 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-scripts\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.675313 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-run-httpd\") pod \"d60f9606-d4b3-4191-9966-53e71096871c\" (UID: \"d60f9606-d4b3-4191-9966-53e71096871c\") " Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.675629 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.677282 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.677308 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d60f9606-d4b3-4191-9966-53e71096871c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.685658 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60f9606-d4b3-4191-9966-53e71096871c-kube-api-access-smcrd" (OuterVolumeSpecName: "kube-api-access-smcrd") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "kube-api-access-smcrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.690446 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-scripts" (OuterVolumeSpecName: "scripts") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.721387 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.741279 4955 generic.go:334] "Generic (PLEG): container finished" podID="d60f9606-d4b3-4191-9966-53e71096871c" containerID="733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df" exitCode=0 Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.741334 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerDied","Data":"733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df"} Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.741365 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d60f9606-d4b3-4191-9966-53e71096871c","Type":"ContainerDied","Data":"4c842f90362733d0533e70ab829fe6e310a4762f560d304c4ac6ce561ce7e3e1"} Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.741388 4955 scope.go:117] "RemoveContainer" containerID="f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.741603 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.782858 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smcrd\" (UniqueName: \"kubernetes.io/projected/d60f9606-d4b3-4191-9966-53e71096871c-kube-api-access-smcrd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.782920 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.782936 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.783022 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.785386 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-config-data" (OuterVolumeSpecName: "config-data") pod "d60f9606-d4b3-4191-9966-53e71096871c" (UID: "d60f9606-d4b3-4191-9966-53e71096871c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.793028 4955 scope.go:117] "RemoveContainer" containerID="44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.821518 4955 scope.go:117] "RemoveContainer" containerID="8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.857069 4955 scope.go:117] "RemoveContainer" containerID="733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.876454 4955 scope.go:117] "RemoveContainer" containerID="f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54" Feb 02 13:20:34 crc kubenswrapper[4955]: E0202 13:20:34.876919 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54\": container with ID starting with f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54 not found: ID does not exist" containerID="f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.876949 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54"} err="failed to get container status \"f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54\": rpc error: code = NotFound desc = could not find container \"f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54\": container with ID starting with f92f285272363f5f248c1606dc6210a0f71b2cb66022d8111ac4d74f5e03fe54 not found: ID does not exist" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.876971 4955 scope.go:117] "RemoveContainer" containerID="44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25" Feb 02 13:20:34 crc kubenswrapper[4955]: E0202 13:20:34.877452 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25\": container with ID starting with 44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25 not found: ID does not exist" containerID="44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.877477 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25"} err="failed to get container status \"44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25\": rpc error: code = NotFound desc = could not find container \"44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25\": container with ID starting with 44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25 not found: ID does not exist" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.877490 4955 scope.go:117] "RemoveContainer" containerID="8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4" Feb 02 13:20:34 crc kubenswrapper[4955]: E0202 13:20:34.877729 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4\": container with ID starting with 
8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4 not found: ID does not exist" containerID="8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.877754 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4"} err="failed to get container status \"8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4\": rpc error: code = NotFound desc = could not find container \"8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4\": container with ID starting with 8e90caf7a64bbce7af3c7f65f290347356037dd8a456c77a1c63241dfe7db9a4 not found: ID does not exist" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.877772 4955 scope.go:117] "RemoveContainer" containerID="733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df" Feb 02 13:20:34 crc kubenswrapper[4955]: E0202 13:20:34.878000 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df\": container with ID starting with 733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df not found: ID does not exist" containerID="733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.878028 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df"} err="failed to get container status \"733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df\": rpc error: code = NotFound desc = could not find container \"733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df\": container with ID starting with 733e7fca041b5538779909c17a99e257d114c752a36bcc891d015644b8f1c4df not found: ID does not exist" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.885178 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:34 crc kubenswrapper[4955]: I0202 13:20:34.885236 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60f9606-d4b3-4191-9966-53e71096871c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.076691 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.085075 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.102706 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.103883 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="sg-core" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.104116 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="sg-core" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.104213 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247c58e7-931d-4356-a900-da1c877548cd" 
containerName="heat-api" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.104286 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="247c58e7-931d-4356-a900-da1c877548cd" containerName="heat-api" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.104362 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" containerName="heat-cfnapi" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.104429 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" containerName="heat-cfnapi" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.104544 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="proxy-httpd" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.104659 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="proxy-httpd" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.104748 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4433372-00c8-4e01-8813-4fed0ea54158" containerName="heat-engine" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.104816 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4433372-00c8-4e01-8813-4fed0ea54158" containerName="heat-engine" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.104895 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-notification-agent" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.104965 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-notification-agent" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.105081 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-central-agent" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.105170 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-central-agent" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.107997 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="247c58e7-931d-4356-a900-da1c877548cd" containerName="heat-api" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108192 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-central-agent" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108293 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="247c58e7-931d-4356-a900-da1c877548cd" containerName="heat-api" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108360 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="sg-core" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108415 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" containerName="heat-cfnapi" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108475 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="proxy-httpd" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108532 4955 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5634670e-87b2-4c94-a877-853ad21f32b2" containerName="heat-cfnapi" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108601 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4433372-00c8-4e01-8813-4fed0ea54158" containerName="heat-engine" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108654 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60f9606-d4b3-4191-9966-53e71096871c" containerName="ceilometer-notification-agent" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.108910 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" containerName="heat-cfnapi" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.108985 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5634670e-87b2-4c94-a877-853ad21f32b2" containerName="heat-cfnapi" Feb 02 13:20:35 crc kubenswrapper[4955]: E0202 13:20:35.109107 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247c58e7-931d-4356-a900-da1c877548cd" containerName="heat-api" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.109157 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="247c58e7-931d-4356-a900-da1c877548cd" containerName="heat-api" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.110729 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.113855 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.114077 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.166650 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190470 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66v6x\" (UniqueName: \"kubernetes.io/projected/587c4297-567e-4d8a-b24e-9792d2fb2ce0-kube-api-access-66v6x\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190510 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190547 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-scripts\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-run-httpd\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190670 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-config-data\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190707 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.190776 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-log-httpd\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.292952 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-log-httpd\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293126 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66v6x\" (UniqueName: \"kubernetes.io/projected/587c4297-567e-4d8a-b24e-9792d2fb2ce0-kube-api-access-66v6x\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293159 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293207 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-scripts\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293268 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-run-httpd\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-config-data\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293898 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-run-httpd\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.293953 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-log-httpd\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.296963 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.297080 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-scripts\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.297656 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.299433 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-config-data\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.310572 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66v6x\" (UniqueName: \"kubernetes.io/projected/587c4297-567e-4d8a-b24e-9792d2fb2ce0-kube-api-access-66v6x\") pod \"ceilometer-0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.496647 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.727387 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60f9606-d4b3-4191-9966-53e71096871c" path="/var/lib/kubelet/pods/d60f9606-d4b3-4191-9966-53e71096871c/volumes" Feb 02 13:20:35 crc kubenswrapper[4955]: I0202 13:20:35.954136 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:35 crc kubenswrapper[4955]: W0202 13:20:35.957423 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod587c4297_567e_4d8a_b24e_9792d2fb2ce0.slice/crio-c2d235584852c7a473e2d628462fb56dff5353a12dc7c4348b7a96f24e6e3767 WatchSource:0}: Error finding container c2d235584852c7a473e2d628462fb56dff5353a12dc7c4348b7a96f24e6e3767: Status 404 returned error can't find the container with id c2d235584852c7a473e2d628462fb56dff5353a12dc7c4348b7a96f24e6e3767 Feb 02 13:20:36 crc kubenswrapper[4955]: I0202 13:20:36.766632 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerStarted","Data":"051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97"} Feb 02 13:20:36 crc kubenswrapper[4955]: I0202 13:20:36.767227 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerStarted","Data":"c2d235584852c7a473e2d628462fb56dff5353a12dc7c4348b7a96f24e6e3767"} Feb 02 13:20:36 crc kubenswrapper[4955]: I0202 13:20:36.900614 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:37 crc kubenswrapper[4955]: I0202 13:20:37.778879 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerStarted","Data":"ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d"} Feb 02 13:20:38 crc kubenswrapper[4955]: I0202 13:20:38.787947 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerStarted","Data":"3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88"} Feb 02 13:20:39 crc kubenswrapper[4955]: E0202 13:20:39.010543 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60f9606_d4b3_4191_9966_53e71096871c.slice/crio-conmon-44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.809203 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerStarted","Data":"ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a"} Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.809784 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-central-agent" containerID="cri-o://051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97" gracePeriod=30 Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.810049 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.811534 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="proxy-httpd" containerID="cri-o://ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a" gracePeriod=30 Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.811616 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="sg-core" containerID="cri-o://3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88" gracePeriod=30 Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.811649 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-notification-agent" containerID="cri-o://ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d" gracePeriod=30 Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.820698 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" event={"ID":"72065269-3b09-46d2-a98d-00f4f38d40a1","Type":"ContainerStarted","Data":"0b18ccb97e9e4af5a4d9edc1c57ce01982e1654f12d19b171ec8871f60ae6bf9"} Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.840256 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.998822774 podStartE2EDuration="5.840232918s" podCreationTimestamp="2026-02-02 13:20:35 +0000 UTC" firstStartedPulling="2026-02-02 13:20:35.960354925 +0000 UTC m=+1086.872691375" lastFinishedPulling="2026-02-02 13:20:39.801765069 +0000 UTC m=+1090.714101519" observedRunningTime="2026-02-02 13:20:40.83033466 +0000 UTC m=+1091.742671120" watchObservedRunningTime="2026-02-02 13:20:40.840232918 +0000 UTC m=+1091.752569368" Feb 02 13:20:40 crc kubenswrapper[4955]: I0202 13:20:40.865982 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" podStartSLOduration=2.6340959489999998 podStartE2EDuration="33.865964129s" podCreationTimestamp="2026-02-02 13:20:07 +0000 UTC" firstStartedPulling="2026-02-02 13:20:08.911104656 +0000 UTC m=+1059.823441096" lastFinishedPulling="2026-02-02 13:20:40.142972826 +0000 UTC m=+1091.055309276" observedRunningTime="2026-02-02 13:20:40.861928252 +0000 UTC m=+1091.774264702" watchObservedRunningTime="2026-02-02 13:20:40.865964129 +0000 UTC m=+1091.778300579" Feb 02 13:20:41 crc kubenswrapper[4955]: I0202 13:20:41.833623 4955 generic.go:334] "Generic (PLEG): container finished" podID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerID="ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a" exitCode=0 Feb 02 13:20:41 crc kubenswrapper[4955]: I0202 13:20:41.833973 4955 generic.go:334] "Generic (PLEG): container finished" podID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerID="3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88" exitCode=2 Feb 02 13:20:41 crc kubenswrapper[4955]: I0202 13:20:41.833982 4955 generic.go:334] "Generic (PLEG): container finished" podID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerID="ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d" exitCode=0 Feb 02 13:20:41 crc kubenswrapper[4955]: I0202 13:20:41.833712 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerDied","Data":"ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a"} Feb 02 13:20:41 crc kubenswrapper[4955]: I0202 13:20:41.834018 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerDied","Data":"3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88"} Feb 02 13:20:41 crc kubenswrapper[4955]: I0202 13:20:41.834032 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerDied","Data":"ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d"} Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.746045 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.781808 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-sg-core-conf-yaml\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.781917 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66v6x\" (UniqueName: \"kubernetes.io/projected/587c4297-567e-4d8a-b24e-9792d2fb2ce0-kube-api-access-66v6x\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.781944 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-combined-ca-bundle\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.781986 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-log-httpd\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.782032 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-scripts\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.782089 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-config-data\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.782117 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-run-httpd\") pod \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\" (UID: \"587c4297-567e-4d8a-b24e-9792d2fb2ce0\") " Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.787190 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.787478 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.791344 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-scripts" (OuterVolumeSpecName: "scripts") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.805777 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587c4297-567e-4d8a-b24e-9792d2fb2ce0-kube-api-access-66v6x" (OuterVolumeSpecName: "kube-api-access-66v6x") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "kube-api-access-66v6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.830036 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.865648 4955 generic.go:334] "Generic (PLEG): container finished" podID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerID="051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97" exitCode=0 Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.865723 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerDied","Data":"051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97"} Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.865755 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"587c4297-567e-4d8a-b24e-9792d2fb2ce0","Type":"ContainerDied","Data":"c2d235584852c7a473e2d628462fb56dff5353a12dc7c4348b7a96f24e6e3767"} Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.865776 4955 scope.go:117] "RemoveContainer" containerID="ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.865927 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.871682 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.884198 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.884225 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.884234 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/587c4297-567e-4d8a-b24e-9792d2fb2ce0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.884241 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.884252 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66v6x\" (UniqueName: \"kubernetes.io/projected/587c4297-567e-4d8a-b24e-9792d2fb2ce0-kube-api-access-66v6x\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.884260 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.896097 4955 scope.go:117] "RemoveContainer" containerID="3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.926378 4955 scope.go:117] "RemoveContainer" containerID="ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.937815 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-config-data" (OuterVolumeSpecName: "config-data") pod "587c4297-567e-4d8a-b24e-9792d2fb2ce0" (UID: "587c4297-567e-4d8a-b24e-9792d2fb2ce0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.951435 4955 scope.go:117] "RemoveContainer" containerID="051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.971273 4955 scope.go:117] "RemoveContainer" containerID="ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a" Feb 02 13:20:44 crc kubenswrapper[4955]: E0202 13:20:44.971741 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a\": container with ID starting with ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a not found: ID does not exist" containerID="ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.971790 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a"} err="failed to get container status \"ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a\": rpc error: code = NotFound desc = could not find container \"ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a\": container with ID starting with ab6e1ac367f60b10119eff5bf38466705e8a7e9b0c3339dbf57116ea5cdb496a not found: ID does not exist" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.971826 4955 scope.go:117] "RemoveContainer" containerID="3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88" Feb 02 13:20:44 crc kubenswrapper[4955]: E0202 13:20:44.973134 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88\": container with ID starting with 3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88 not found: ID does not exist" containerID="3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.973165 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88"} err="failed to get container status \"3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88\": rpc error: code = NotFound desc = could not find container \"3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88\": container with ID starting with 3e0507be32b1957ef8548ac71e6ccf32cc69cde79bc4fea4eb3d8b1ca7224b88 not found: ID does not exist" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.973183 4955 scope.go:117] "RemoveContainer" containerID="ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d" Feb 02 13:20:44 crc kubenswrapper[4955]: E0202 13:20:44.973431 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d\": container with ID starting with ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d not found: ID does not exist" containerID="ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.973459 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d"} err="failed to get container status \"ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d\": rpc error: code = NotFound desc = could not find container \"ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d\": container with ID starting with ac249dab2f04b38359fa74b43d9416b06532ae9481237a13d8ab50012a460a3d not found: ID does not exist" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.973479 4955 scope.go:117] "RemoveContainer" containerID="051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97" Feb 02 13:20:44 crc kubenswrapper[4955]: E0202 13:20:44.973790 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97\": container with ID starting with 051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97 not found: ID does not exist" containerID="051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.973829 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97"} err="failed to get container status \"051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97\": rpc error: code = NotFound desc = could not find container \"051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97\": container with ID starting with 051d158070c24fd0e37c95d45ac1b9afdafd3658cb080965aa5ff49f9bca9f97 not found: ID does not exist" Feb 02 13:20:44 crc kubenswrapper[4955]: I0202 13:20:44.986363 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587c4297-567e-4d8a-b24e-9792d2fb2ce0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.209256 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.220585 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232233 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:45 crc kubenswrapper[4955]: E0202 13:20:45.232601 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="proxy-httpd" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232615 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="proxy-httpd" Feb 02 13:20:45 crc kubenswrapper[4955]: E0202 13:20:45.232627 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-central-agent" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232633 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-central-agent" Feb 02 13:20:45 crc kubenswrapper[4955]: E0202 13:20:45.232643 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-notification-agent" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232649 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-notification-agent" Feb 02 13:20:45 crc kubenswrapper[4955]: E0202 13:20:45.232660 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="sg-core" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232666 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="sg-core" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232852 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-central-agent" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232869 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="proxy-httpd" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232881 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="ceilometer-notification-agent" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.232889 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" containerName="sg-core" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.258981 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.264566 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.269091 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.269793 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297387 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-run-httpd\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297503 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-scripts\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297529 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-log-httpd\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297568 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-config-data\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297659 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297776 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx99l\" (UniqueName: \"kubernetes.io/projected/5df1fa3a-f74c-4f25-ade2-3cb472d02546-kube-api-access-zx99l\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.297936 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399137 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-scripts\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399192 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-log-httpd\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-config-data\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399280 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399305 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx99l\" (UniqueName: \"kubernetes.io/projected/5df1fa3a-f74c-4f25-ade2-3cb472d02546-kube-api-access-zx99l\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399361 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.399427 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-run-httpd\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.400412 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-run-httpd\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.400942 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-log-httpd\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.403503 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.404703 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-scripts\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.405138 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-config-data\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.406238 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.424460 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx99l\" (UniqueName: \"kubernetes.io/projected/5df1fa3a-f74c-4f25-ade2-3cb472d02546-kube-api-access-zx99l\") pod \"ceilometer-0\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.585513 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:20:45 crc kubenswrapper[4955]: I0202 13:20:45.733648 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587c4297-567e-4d8a-b24e-9792d2fb2ce0" path="/var/lib/kubelet/pods/587c4297-567e-4d8a-b24e-9792d2fb2ce0/volumes" Feb 02 13:20:46 crc kubenswrapper[4955]: I0202 13:20:46.036155 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:20:46 crc kubenswrapper[4955]: I0202 13:20:46.886161 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerStarted","Data":"7d4bcf1def66b6c781f24fe3fa56ffd15e491ad548115023c60e6a8016d83e26"} Feb 02 13:20:46 crc kubenswrapper[4955]: I0202 13:20:46.886523 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerStarted","Data":"f833c63cae21661d00659de07239350741f17f25513247a119f488d8e6c6adac"} Feb 02 13:20:47 crc kubenswrapper[4955]: I0202 13:20:47.897706 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerStarted","Data":"9abf72807a12aaf083e4cc8e89156551292ddc0ced25247c78379f5930aa711d"} Feb 02 13:20:48 crc kubenswrapper[4955]: I0202 13:20:48.917147 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerStarted","Data":"4b5f02ce076d438b552f28fed3c979903347281445e0a1b2544039c8aced7cf3"} Feb 02 13:20:49 crc kubenswrapper[4955]: E0202 13:20:49.254835 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60f9606_d4b3_4191_9966_53e71096871c.slice/crio-conmon-44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:20:50 crc kubenswrapper[4955]: I0202 13:20:50.932741 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerStarted","Data":"a7f4e63c76cd65dbdeb0abcf919fdd4b5415c6beb0d938ec0c10ef8f0d6a21bf"} Feb 02 13:20:50 crc kubenswrapper[4955]: I0202 13:20:50.933462 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:20:50 crc kubenswrapper[4955]: I0202 13:20:50.962623 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.527531381 podStartE2EDuration="5.96260861s" podCreationTimestamp="2026-02-02 13:20:45 +0000 UTC" firstStartedPulling="2026-02-02 13:20:46.029277767 +0000 UTC m=+1096.941614217" lastFinishedPulling="2026-02-02 13:20:50.464355006 +0000 UTC m=+1101.376691446" observedRunningTime="2026-02-02 13:20:50.960060259 +0000 UTC m=+1101.872396699" watchObservedRunningTime="2026-02-02 13:20:50.96260861 +0000 UTC m=+1101.874945060" Feb 02 13:20:55 crc kubenswrapper[4955]: I0202 13:20:55.972844 4955 generic.go:334] "Generic (PLEG): container finished" podID="72065269-3b09-46d2-a98d-00f4f38d40a1" containerID="0b18ccb97e9e4af5a4d9edc1c57ce01982e1654f12d19b171ec8871f60ae6bf9" exitCode=0 Feb 02 13:20:55 crc kubenswrapper[4955]: I0202 13:20:55.972950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-8vmrp" event={"ID":"72065269-3b09-46d2-a98d-00f4f38d40a1","Type":"ContainerDied","Data":"0b18ccb97e9e4af5a4d9edc1c57ce01982e1654f12d19b171ec8871f60ae6bf9"} Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.401863 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.517025 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-combined-ca-bundle\") pod \"72065269-3b09-46d2-a98d-00f4f38d40a1\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.517222 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-config-data\") pod \"72065269-3b09-46d2-a98d-00f4f38d40a1\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.517278 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-scripts\") pod \"72065269-3b09-46d2-a98d-00f4f38d40a1\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.517340 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdwnq\" (UniqueName: \"kubernetes.io/projected/72065269-3b09-46d2-a98d-00f4f38d40a1-kube-api-access-xdwnq\") pod \"72065269-3b09-46d2-a98d-00f4f38d40a1\" (UID: \"72065269-3b09-46d2-a98d-00f4f38d40a1\") " Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.522830 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-scripts" (OuterVolumeSpecName: "scripts") pod "72065269-3b09-46d2-a98d-00f4f38d40a1" (UID: "72065269-3b09-46d2-a98d-00f4f38d40a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.527226 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72065269-3b09-46d2-a98d-00f4f38d40a1-kube-api-access-xdwnq" (OuterVolumeSpecName: "kube-api-access-xdwnq") pod "72065269-3b09-46d2-a98d-00f4f38d40a1" (UID: "72065269-3b09-46d2-a98d-00f4f38d40a1"). InnerVolumeSpecName "kube-api-access-xdwnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.546108 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72065269-3b09-46d2-a98d-00f4f38d40a1" (UID: "72065269-3b09-46d2-a98d-00f4f38d40a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.548361 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-config-data" (OuterVolumeSpecName: "config-data") pod "72065269-3b09-46d2-a98d-00f4f38d40a1" (UID: "72065269-3b09-46d2-a98d-00f4f38d40a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.620594 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.620631 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdwnq\" (UniqueName: \"kubernetes.io/projected/72065269-3b09-46d2-a98d-00f4f38d40a1-kube-api-access-xdwnq\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.620643 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.620652 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72065269-3b09-46d2-a98d-00f4f38d40a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.994041 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" event={"ID":"72065269-3b09-46d2-a98d-00f4f38d40a1","Type":"ContainerDied","Data":"7fea2abc6b63e2474929975ff3ce3bd81ae3527860a2d74f81d974c49fad2a15"} Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.994413 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fea2abc6b63e2474929975ff3ce3bd81ae3527860a2d74f81d974c49fad2a15" Feb 02 13:20:57 crc kubenswrapper[4955]: I0202 13:20:57.994357 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8vmrp" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.105790 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 13:20:58 crc kubenswrapper[4955]: E0202 13:20:58.106294 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72065269-3b09-46d2-a98d-00f4f38d40a1" containerName="nova-cell0-conductor-db-sync" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.106318 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="72065269-3b09-46d2-a98d-00f4f38d40a1" containerName="nova-cell0-conductor-db-sync" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.106549 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="72065269-3b09-46d2-a98d-00f4f38d40a1" containerName="nova-cell0-conductor-db-sync" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.107333 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.112985 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zshhj" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.113004 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.118527 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.131107 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f768e1f2-169e-4773-8173-fd0d4f57d90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.131246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fggrz\" (UniqueName: \"kubernetes.io/projected/f768e1f2-169e-4773-8173-fd0d4f57d90d-kube-api-access-fggrz\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.131281 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f768e1f2-169e-4773-8173-fd0d4f57d90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.232584 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fggrz\" (UniqueName: \"kubernetes.io/projected/f768e1f2-169e-4773-8173-fd0d4f57d90d-kube-api-access-fggrz\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.232665 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f768e1f2-169e-4773-8173-fd0d4f57d90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.232712 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f768e1f2-169e-4773-8173-fd0d4f57d90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.243613 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f768e1f2-169e-4773-8173-fd0d4f57d90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.244318 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f768e1f2-169e-4773-8173-fd0d4f57d90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.247608 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fggrz\" (UniqueName: \"kubernetes.io/projected/f768e1f2-169e-4773-8173-fd0d4f57d90d-kube-api-access-fggrz\") pod \"nova-cell0-conductor-0\" (UID: \"f768e1f2-169e-4773-8173-fd0d4f57d90d\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.440923 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 13:20:58 crc kubenswrapper[4955]: I0202 13:20:58.905848 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 13:20:59 crc kubenswrapper[4955]: I0202 13:20:59.005144 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f768e1f2-169e-4773-8173-fd0d4f57d90d","Type":"ContainerStarted","Data":"4970a729608799e1219fd3eedfd7e013030064761c75d0dcb374ddde0216d8dc"} Feb 02 13:20:59 crc kubenswrapper[4955]: E0202 13:20:59.489220 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60f9606_d4b3_4191_9966_53e71096871c.slice/crio-conmon-44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:21:00 crc kubenswrapper[4955]: I0202 13:21:00.016332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f768e1f2-169e-4773-8173-fd0d4f57d90d","Type":"ContainerStarted","Data":"da16752d1e0e8a4c42777efcf44ee65bdd38e4945bdc09e8e1287fde584241e2"} Feb 02 13:21:00 crc kubenswrapper[4955]: I0202 13:21:00.016470 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 13:21:03 crc kubenswrapper[4955]: I0202 13:21:03.016573 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:21:03 crc kubenswrapper[4955]: I0202 13:21:03.017290 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 13:21:08.467522 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 13:21:08.485351 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=10.485334511 podStartE2EDuration="10.485334511s" podCreationTimestamp="2026-02-02 13:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:00.053299427 +0000 UTC m=+1110.965635887" watchObservedRunningTime="2026-02-02 13:21:08.485334511 +0000 UTC m=+1119.397670961" Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 
13:21:08.939439 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dxm48"] Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 13:21:08.940719 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 13:21:08.944930 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 13:21:08.945151 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 13:21:08 crc kubenswrapper[4955]: I0202 13:21:08.958099 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dxm48"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.036993 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-config-data\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.037056 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msz8q\" (UniqueName: \"kubernetes.io/projected/35bfa86c-42fd-4c56-8b93-599e84fb52df-kube-api-access-msz8q\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.037306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-scripts\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.037330 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.121748 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.123203 4955 util.go:30] "No sandbox for pod can be found. 
Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.123203 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.125812 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.138839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.139390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-scripts\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.139504 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.139744 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-config-data\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.139843 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83014d43-e06b-4d7f-ae20-35da704852c7-logs\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.139929 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngssb\" (UniqueName: \"kubernetes.io/projected/83014d43-e06b-4d7f-ae20-35da704852c7-kube-api-access-ngssb\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.140005 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msz8q\" (UniqueName: \"kubernetes.io/projected/35bfa86c-42fd-4c56-8b93-599e84fb52df-kube-api-access-msz8q\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.140115 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-config-data\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.145003 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.147696 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.150702 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-scripts\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.156333 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-config-data\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.166590 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.175149 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.181179 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.218908 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msz8q\" (UniqueName: \"kubernetes.io/projected/35bfa86c-42fd-4c56-8b93-599e84fb52df-kube-api-access-msz8q\") pod \"nova-cell0-cell-mapping-dxm48\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.239083 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251532 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vcm\" (UniqueName: \"kubernetes.io/projected/07100899-547d-48d7-931c-f5f91916e3b5-kube-api-access-g7vcm\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251707 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83014d43-e06b-4d7f-ae20-35da704852c7-logs\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251755 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngssb\" (UniqueName: \"kubernetes.io/projected/83014d43-e06b-4d7f-ae20-35da704852c7-kube-api-access-ngssb\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251822 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07100899-547d-48d7-931c-f5f91916e3b5-logs\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251858 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251892 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-config-data\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.251921 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.252004 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-config-data\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.252546 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83014d43-e06b-4d7f-ae20-35da704852c7-logs\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.258209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-config-data\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.262685 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.268100 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.289633 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.291203 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.294932 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngssb\" (UniqueName: \"kubernetes.io/projected/83014d43-e06b-4d7f-ae20-35da704852c7-kube-api-access-ngssb\") pod \"nova-api-0\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.304948 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.310893 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356171 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-config-data\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356349 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldl5r\" (UniqueName: \"kubernetes.io/projected/8efb81bd-5ea0-4eb0-98fd-96186da0e309-kube-api-access-ldl5r\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356376 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-config-data\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356398 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vcm\" (UniqueName: \"kubernetes.io/projected/07100899-547d-48d7-931c-f5f91916e3b5-kube-api-access-g7vcm\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356420 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356592 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07100899-547d-48d7-931c-f5f91916e3b5-logs\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.356622 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.357600 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 
13:21:09.358968 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.360804 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.366520 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07100899-547d-48d7-931c-f5f91916e3b5-logs\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.366663 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.374389 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-config-data\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.398803 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vcm\" (UniqueName: \"kubernetes.io/projected/07100899-547d-48d7-931c-f5f91916e3b5-kube-api-access-g7vcm\") pod \"nova-metadata-0\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.407094 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.442810 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.459604 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldl5r\" (UniqueName: \"kubernetes.io/projected/8efb81bd-5ea0-4eb0-98fd-96186da0e309-kube-api-access-ldl5r\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.459653 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-config-data\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.459684 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.459855 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88h6s\" (UniqueName: \"kubernetes.io/projected/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-kube-api-access-88h6s\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.459891 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.459924 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.473368 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-config-data\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.473886 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.478978 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.486417 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-fbzdk"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.490433 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.538344 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldl5r\" (UniqueName: \"kubernetes.io/projected/8efb81bd-5ea0-4eb0-98fd-96186da0e309-kube-api-access-ldl5r\") pod \"nova-scheduler-0\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573166 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-svc\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573250 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-config\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573275 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6g5z\" (UniqueName: \"kubernetes.io/projected/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-kube-api-access-f6g5z\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573332 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573370 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88h6s\" (UniqueName: \"kubernetes.io/projected/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-kube-api-access-88h6s\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573410 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573445 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573498 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.573783 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.580522 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.581934 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.589091 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-fbzdk"] Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.613425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88h6s\" (UniqueName: \"kubernetes.io/projected/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-kube-api-access-88h6s\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.677417 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-svc\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.677463 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-config\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.677481 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6g5z\" (UniqueName: \"kubernetes.io/projected/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-kube-api-access-f6g5z\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.677503 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.677534 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.677637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.678716 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.682507 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.682973 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.685748 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-config\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.686080 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-svc\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.715577 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6g5z\" (UniqueName: \"kubernetes.io/projected/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-kube-api-access-f6g5z\") pod \"dnsmasq-dns-9b86998b5-fbzdk\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.786537 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.868083 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.895051 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:09 crc kubenswrapper[4955]: I0202 13:21:09.930526 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dxm48"] Feb 02 13:21:10 crc kubenswrapper[4955]: E0202 13:21:10.059933 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60f9606_d4b3_4191_9966_53e71096871c.slice/crio-conmon-44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.171237 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dxm48" event={"ID":"35bfa86c-42fd-4c56-8b93-599e84fb52df","Type":"ContainerStarted","Data":"4e9c71a21be50487fd7f624e8befe65aaecb2e9b099707d494baeab0182e91e4"} Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.199189 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.354949 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.488871 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.606157 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.628272 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q5zzn"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.630725 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.634220 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.634465 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.653442 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q5zzn"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.822924 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-fbzdk"] Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.828652 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-scripts\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.828744 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgjc\" (UniqueName: \"kubernetes.io/projected/4b673660-45c3-419e-af6f-66cb08d272e0-kube-api-access-vdgjc\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.828824 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.828898 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-config-data\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: W0202 13:21:10.829349 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97dd00fc_d7ac_4e8a_a7e0_920524b0fd21.slice/crio-02587679aa51fc370dd81e3937fae60c2fd75cb40287b60f36de373e19ed2c62 WatchSource:0}: Error finding container 02587679aa51fc370dd81e3937fae60c2fd75cb40287b60f36de373e19ed2c62: Status 404 returned error can't find the container with id 02587679aa51fc370dd81e3937fae60c2fd75cb40287b60f36de373e19ed2c62 Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.930996 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.931081 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-config-data\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.931131 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-scripts\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.931165 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdgjc\" (UniqueName: \"kubernetes.io/projected/4b673660-45c3-419e-af6f-66cb08d272e0-kube-api-access-vdgjc\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.934732 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.935323 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-config-data\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.937085 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-scripts\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.947716 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgjc\" (UniqueName: \"kubernetes.io/projected/4b673660-45c3-419e-af6f-66cb08d272e0-kube-api-access-vdgjc\") pod \"nova-cell1-conductor-db-sync-q5zzn\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:10 crc kubenswrapper[4955]: I0202 13:21:10.957995 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.193816 4955 generic.go:334] "Generic (PLEG): container finished" podID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerID="0b58174174e0d71362114d23dd92fc13290e2f76262ab5c988ee158cf69b6c63" exitCode=0 Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.193881 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" event={"ID":"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21","Type":"ContainerDied","Data":"0b58174174e0d71362114d23dd92fc13290e2f76262ab5c988ee158cf69b6c63"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.193906 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" event={"ID":"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21","Type":"ContainerStarted","Data":"02587679aa51fc370dd81e3937fae60c2fd75cb40287b60f36de373e19ed2c62"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.199767 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5f617af2-0a24-4beb-a9d3-76928d6bc9e1","Type":"ContainerStarted","Data":"7956db3826a1a3d3f88a92159a9d280d307023f5e7c3f01cb27ac70020cb20f1"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.204826 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83014d43-e06b-4d7f-ae20-35da704852c7","Type":"ContainerStarted","Data":"787bdee69f49228560733961d5a3c6f334bf4066ce100b828bb6fd2181317ecc"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.220395 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dxm48" event={"ID":"35bfa86c-42fd-4c56-8b93-599e84fb52df","Type":"ContainerStarted","Data":"f251c6b4795788977dd41e0277ee1fce2fd8e91dfa29760dadb6bebd0367034e"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.225529 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07100899-547d-48d7-931c-f5f91916e3b5","Type":"ContainerStarted","Data":"d5bad269914f6cbed9ab5c8b3697eed2083baa838773e43feafc1fbfaf0687ef"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.228662 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8efb81bd-5ea0-4eb0-98fd-96186da0e309","Type":"ContainerStarted","Data":"8d9df5b9e120f7dea0f909a1397749b73772acbc6b389dff442d822c3de45ec5"} Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.287033 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dxm48" podStartSLOduration=3.287011141 podStartE2EDuration="3.287011141s" podCreationTimestamp="2026-02-02 13:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:11.238249767 +0000 UTC m=+1122.150586217" watchObservedRunningTime="2026-02-02 13:21:11.287011141 +0000 UTC m=+1122.199347591" Feb 02 13:21:11 crc kubenswrapper[4955]: I0202 13:21:11.509413 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q5zzn"] Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.240725 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" event={"ID":"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21","Type":"ContainerStarted","Data":"7f68da286d6d5de227f05cbf1a51162390846d3ed4f28864581f4a6bd2b3dea1"} Feb 02 
13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.241154 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.244186 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" event={"ID":"4b673660-45c3-419e-af6f-66cb08d272e0","Type":"ContainerStarted","Data":"f2ecabd0323a48a97418b5e95039ea3e8010eb1ff6b95df11e39612af5422926"} Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.244240 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" event={"ID":"4b673660-45c3-419e-af6f-66cb08d272e0","Type":"ContainerStarted","Data":"0847bfe8cc0c2bbedb4ee0439185124b7973a90aa35c636c0a2ad74c0e7165ae"} Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.273067 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" podStartSLOduration=3.273042506 podStartE2EDuration="3.273042506s" podCreationTimestamp="2026-02-02 13:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:12.266887087 +0000 UTC m=+1123.179223557" watchObservedRunningTime="2026-02-02 13:21:12.273042506 +0000 UTC m=+1123.185378956" Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.304876 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" podStartSLOduration=2.304847109 podStartE2EDuration="2.304847109s" podCreationTimestamp="2026-02-02 13:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:12.294597139 +0000 UTC m=+1123.206933589" watchObservedRunningTime="2026-02-02 13:21:12.304847109 +0000 UTC m=+1123.217183559" Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.559305 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:21:12 crc kubenswrapper[4955]: I0202 13:21:12.572540 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:15 crc kubenswrapper[4955]: I0202 13:21:15.623394 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.279537 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5f617af2-0a24-4beb-a9d3-76928d6bc9e1","Type":"ContainerStarted","Data":"e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270"} Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.279838 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5f617af2-0a24-4beb-a9d3-76928d6bc9e1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270" gracePeriod=30 Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.282378 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83014d43-e06b-4d7f-ae20-35da704852c7","Type":"ContainerStarted","Data":"84b964b59b521654a67042365d8fb861f4908ffe5a2df45c9b53b735ba4a8ab6"}
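The "SyncLoop (PLEG): event for pod" lines come from the pod lifecycle event generator, which relists containers, diffs the result against its cache, and feeds events into the kubelet sync loop; that is why dnsmasq's init container shows up further up as ContainerDied with exit code 0 just before the dnsmasq-dns container's ContainerStarted. A rough Go model of the payload printed in these entries, assuming only what the log itself shows (ID is the pod UID, Data a container or sandbox ID); this is not kubelet's actual type definition:

package main

import (
    "encoding/json"
    "fmt"
)

type podLifecycleEvent struct {
    ID   string `json:"ID"`   // pod UID
    Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
    Data string `json:"Data"` // container or sandbox ID
}

func main() {
    raw := `{"ID":"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21","Type":"ContainerDied","Data":"0b58174174e0d71362114d23dd92fc13290e2f76262ab5c988ee158cf69b6c63"}`
    var ev podLifecycleEvent
    if err := json.Unmarshal([]byte(raw), &ev); err != nil {
        panic(err)
    }
    // An init container that exits 0, as dnsmasq-dns's does above, still
    // surfaces as ContainerDied even though the pod is progressing normally.
    fmt.Printf("pod %s: %s (%s...)\n", ev.ID, ev.Type, ev.Data[:12])
}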
pod="openstack/nova-api-0" event={"ID":"83014d43-e06b-4d7f-ae20-35da704852c7","Type":"ContainerStarted","Data":"2dd7a1e5f695263f63a5b85a8c5eafa2426c2d5c75bd7c1bf694bb3a925c9672"} Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.284951 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07100899-547d-48d7-931c-f5f91916e3b5","Type":"ContainerStarted","Data":"4109a308b52284e8eab4ee07b93c9a384069b1bef243dc3d08cdb5b6f8bec11a"} Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.285081 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07100899-547d-48d7-931c-f5f91916e3b5","Type":"ContainerStarted","Data":"5e0f3de50f2007e9d30d420808e71211c588ca4c24f5ad10dd9ab7d363d14880"} Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.285234 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-log" containerID="cri-o://5e0f3de50f2007e9d30d420808e71211c588ca4c24f5ad10dd9ab7d363d14880" gracePeriod=30 Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.285405 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-metadata" containerID="cri-o://4109a308b52284e8eab4ee07b93c9a384069b1bef243dc3d08cdb5b6f8bec11a" gracePeriod=30 Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.288636 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8efb81bd-5ea0-4eb0-98fd-96186da0e309","Type":"ContainerStarted","Data":"8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993"} Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.310589 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5899288240000002 podStartE2EDuration="7.310545816s" podCreationTimestamp="2026-02-02 13:21:09 +0000 UTC" firstStartedPulling="2026-02-02 13:21:10.615163562 +0000 UTC m=+1121.527500012" lastFinishedPulling="2026-02-02 13:21:15.335780554 +0000 UTC m=+1126.248117004" observedRunningTime="2026-02-02 13:21:16.298499544 +0000 UTC m=+1127.210836014" watchObservedRunningTime="2026-02-02 13:21:16.310545816 +0000 UTC m=+1127.222882266" Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.325190 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.327297358 podStartE2EDuration="7.325169041s" podCreationTimestamp="2026-02-02 13:21:09 +0000 UTC" firstStartedPulling="2026-02-02 13:21:10.341063578 +0000 UTC m=+1121.253400028" lastFinishedPulling="2026-02-02 13:21:15.338935261 +0000 UTC m=+1126.251271711" observedRunningTime="2026-02-02 13:21:16.319230997 +0000 UTC m=+1127.231567447" watchObservedRunningTime="2026-02-02 13:21:16.325169041 +0000 UTC m=+1127.237505491" Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.343135 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.419729852 podStartE2EDuration="7.343115237s" podCreationTimestamp="2026-02-02 13:21:09 +0000 UTC" firstStartedPulling="2026-02-02 13:21:10.414254374 +0000 UTC m=+1121.326590824" lastFinishedPulling="2026-02-02 13:21:15.337639739 +0000 UTC m=+1126.249976209" observedRunningTime="2026-02-02 13:21:16.338477974 +0000 UTC 
m=+1127.250814424" watchObservedRunningTime="2026-02-02 13:21:16.343115237 +0000 UTC m=+1127.255451677" Feb 02 13:21:16 crc kubenswrapper[4955]: I0202 13:21:16.359460 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.526596816 podStartE2EDuration="7.359441863s" podCreationTimestamp="2026-02-02 13:21:09 +0000 UTC" firstStartedPulling="2026-02-02 13:21:10.50470312 +0000 UTC m=+1121.417039580" lastFinishedPulling="2026-02-02 13:21:15.337548177 +0000 UTC m=+1126.249884627" observedRunningTime="2026-02-02 13:21:16.358240824 +0000 UTC m=+1127.270577294" watchObservedRunningTime="2026-02-02 13:21:16.359441863 +0000 UTC m=+1127.271778313" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.301516 4955 generic.go:334] "Generic (PLEG): container finished" podID="07100899-547d-48d7-931c-f5f91916e3b5" containerID="4109a308b52284e8eab4ee07b93c9a384069b1bef243dc3d08cdb5b6f8bec11a" exitCode=0 Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.301852 4955 generic.go:334] "Generic (PLEG): container finished" podID="07100899-547d-48d7-931c-f5f91916e3b5" containerID="5e0f3de50f2007e9d30d420808e71211c588ca4c24f5ad10dd9ab7d363d14880" exitCode=143 Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.301585 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07100899-547d-48d7-931c-f5f91916e3b5","Type":"ContainerDied","Data":"4109a308b52284e8eab4ee07b93c9a384069b1bef243dc3d08cdb5b6f8bec11a"} Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.301939 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07100899-547d-48d7-931c-f5f91916e3b5","Type":"ContainerDied","Data":"5e0f3de50f2007e9d30d420808e71211c588ca4c24f5ad10dd9ab7d363d14880"} Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.386622 4955 util.go:48] "No ready sandbox for pod can be found. 
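Two exit codes in the events above tell the shutdown story: nova-metadata-metadata exits 0, while nova-metadata-log exits 143 = 128 + 15, meaning it died on the SIGTERM that the "Killing container with a grace period" step (gracePeriod=30) delivered rather than shutting down first. A process that wants the clean exit handles the signal itself; a minimal Go sketch (illustrative only, not the nova container's actual code):

package main

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    // The container runtime delivers SIGTERM when the kubelet starts the kill.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
    defer stop()

    <-ctx.Done()
    fmt.Println("draining...")
    time.Sleep(100 * time.Millisecond) // stand-in for real cleanup; must finish well under 30s
    // Returning normally exits 0; a process that ignores SIGTERM is instead
    // SIGKILLed at the deadline, and dying on a signal is what shows up as
    // exit codes like 143 (SIGTERM) or 137 (SIGKILL).
}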
Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.386622 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.568925 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-combined-ca-bundle\") pod \"07100899-547d-48d7-931c-f5f91916e3b5\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.569099 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-config-data\") pod \"07100899-547d-48d7-931c-f5f91916e3b5\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.569272 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7vcm\" (UniqueName: \"kubernetes.io/projected/07100899-547d-48d7-931c-f5f91916e3b5-kube-api-access-g7vcm\") pod \"07100899-547d-48d7-931c-f5f91916e3b5\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.569306 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07100899-547d-48d7-931c-f5f91916e3b5-logs\") pod \"07100899-547d-48d7-931c-f5f91916e3b5\" (UID: \"07100899-547d-48d7-931c-f5f91916e3b5\") " Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.569931 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07100899-547d-48d7-931c-f5f91916e3b5-logs" (OuterVolumeSpecName: "logs") pod "07100899-547d-48d7-931c-f5f91916e3b5" (UID: "07100899-547d-48d7-931c-f5f91916e3b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.575776 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07100899-547d-48d7-931c-f5f91916e3b5-kube-api-access-g7vcm" (OuterVolumeSpecName: "kube-api-access-g7vcm") pod "07100899-547d-48d7-931c-f5f91916e3b5" (UID: "07100899-547d-48d7-931c-f5f91916e3b5"). InnerVolumeSpecName "kube-api-access-g7vcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.597877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-config-data" (OuterVolumeSpecName: "config-data") pod "07100899-547d-48d7-931c-f5f91916e3b5" (UID: "07100899-547d-48d7-931c-f5f91916e3b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
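Teardown above mirrors setup in reverse: UnmountVolume started for each of the pod's four volumes, per-plugin TearDown succeeded, and, just below, "Volume detached" once the actual state of world catches up. Tracing one pod through a journal this dense is easiest by filtering on its UID; a small hypothetical Go helper (not part of kubelet), assuming the journal has been exported to a file named kubelet.log:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    const podUID = "07100899-547d-48d7-931c-f5f91916e3b5" // the old nova-metadata-0 pod
    f, err := os.Open("kubelet.log") // assumed export, e.g. journalctl -u kubelet > kubelet.log
    if err != nil {
        panic(err)
    }
    defer f.Close()

    sc := bufio.NewScanner(f)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
    for sc.Scan() {
        line := sc.Text()
        if strings.Contains(line, podUID) &&
            (strings.Contains(line, "UnmountVolume") || strings.Contains(line, "Volume detached")) {
            fmt.Println(line)
        }
    }
}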
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.671797 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7vcm\" (UniqueName: \"kubernetes.io/projected/07100899-547d-48d7-931c-f5f91916e3b5-kube-api-access-g7vcm\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.671832 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07100899-547d-48d7-931c-f5f91916e3b5-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.671845 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:17 crc kubenswrapper[4955]: I0202 13:21:17.671854 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07100899-547d-48d7-931c-f5f91916e3b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.311599 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07100899-547d-48d7-931c-f5f91916e3b5","Type":"ContainerDied","Data":"d5bad269914f6cbed9ab5c8b3697eed2083baa838773e43feafc1fbfaf0687ef"} Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.311663 4955 scope.go:117] "RemoveContainer" containerID="4109a308b52284e8eab4ee07b93c9a384069b1bef243dc3d08cdb5b6f8bec11a" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.311659 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.332547 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.338503 4955 scope.go:117] "RemoveContainer" containerID="5e0f3de50f2007e9d30d420808e71211c588ca4c24f5ad10dd9ab7d363d14880" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.357267 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.388096 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:18 crc kubenswrapper[4955]: E0202 13:21:18.388596 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-metadata" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.388615 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-metadata" Feb 02 13:21:18 crc kubenswrapper[4955]: E0202 13:21:18.388631 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-log" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.388638 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-log" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.388844 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-log" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.388864 4955 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07100899-547d-48d7-931c-f5f91916e3b5" containerName="nova-metadata-metadata" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.389819 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.392288 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.392803 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.400904 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.485707 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18864d22-515b-4b82-befe-d73dbcf192b9-logs\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.485775 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-config-data\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.485884 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrnx\" (UniqueName: \"kubernetes.io/projected/18864d22-515b-4b82-befe-d73dbcf192b9-kube-api-access-mkrnx\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.486231 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.486318 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.588229 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18864d22-515b-4b82-befe-d73dbcf192b9-logs\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.588284 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-config-data\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.588314 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkrnx\" (UniqueName: 
\"kubernetes.io/projected/18864d22-515b-4b82-befe-d73dbcf192b9-kube-api-access-mkrnx\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.588392 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.588430 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.588734 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18864d22-515b-4b82-befe-d73dbcf192b9-logs\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.594472 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.594522 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-config-data\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.594599 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.607358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkrnx\" (UniqueName: \"kubernetes.io/projected/18864d22-515b-4b82-befe-d73dbcf192b9-kube-api-access-mkrnx\") pod \"nova-metadata-0\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " pod="openstack/nova-metadata-0" Feb 02 13:21:18 crc kubenswrapper[4955]: I0202 13:21:18.708044 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.270991 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.322620 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18864d22-515b-4b82-befe-d73dbcf192b9","Type":"ContainerStarted","Data":"d229db4040f97f53e4f727dd5ac035e7b5bf2dfba9288aadfd9a9ab75edbde05"} Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.324431 4955 generic.go:334] "Generic (PLEG): container finished" podID="35bfa86c-42fd-4c56-8b93-599e84fb52df" containerID="f251c6b4795788977dd41e0277ee1fce2fd8e91dfa29760dadb6bebd0367034e" exitCode=0 Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.324492 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dxm48" event={"ID":"35bfa86c-42fd-4c56-8b93-599e84fb52df","Type":"ContainerDied","Data":"f251c6b4795788977dd41e0277ee1fce2fd8e91dfa29760dadb6bebd0367034e"} Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.446009 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.446060 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.714377 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.714615 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4a5d9947-5c29-430e-b975-78024809faed" containerName="kube-state-metrics" containerID="cri-o://d933295a784ad31146861b17fbbe3681e8b8004cabd5d26bc1ae1ad26b70e093" gracePeriod=30 Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.745496 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07100899-547d-48d7-931c-f5f91916e3b5" path="/var/lib/kubelet/pods/07100899-547d-48d7-931c-f5f91916e3b5/volumes" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.787549 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.787630 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.869277 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:19 crc kubenswrapper[4955]: I0202 13:21:19.901679 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.001280 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.024407 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-qw2ns"] Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.024699 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerName="dnsmasq-dns" containerID="cri-o://da73f062eb3f0dd87828ede93621219124e8d98bdf9bab1961dcc5e4f7e3e41f" 
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.427082 4955 generic.go:334] "Generic (PLEG): container finished" podID="4a5d9947-5c29-430e-b975-78024809faed" containerID="d933295a784ad31146861b17fbbe3681e8b8004cabd5d26bc1ae1ad26b70e093" exitCode=2
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.427401 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a5d9947-5c29-430e-b975-78024809faed","Type":"ContainerDied","Data":"d933295a784ad31146861b17fbbe3681e8b8004cabd5d26bc1ae1ad26b70e093"}
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.429173 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18864d22-515b-4b82-befe-d73dbcf192b9","Type":"ContainerStarted","Data":"44ba72042e3511728fe4aa81a695a7b2aaaa8bf9d46f01bdd1d3ecb6cfce3a8a"}
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.429194 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18864d22-515b-4b82-befe-d73dbcf192b9","Type":"ContainerStarted","Data":"2b66ee8e4e2efa1da40bb07ff47d1cb0c4298d600cd9624e3e7f72e3bb8a4659"}
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.432696 4955 generic.go:334] "Generic (PLEG): container finished" podID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerID="da73f062eb3f0dd87828ede93621219124e8d98bdf9bab1961dcc5e4f7e3e41f" exitCode=0
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.432849 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" event={"ID":"02c53bcf-5b6a-4bc5-b677-b01a827904ff","Type":"ContainerDied","Data":"da73f062eb3f0dd87828ede93621219124e8d98bdf9bab1961dcc5e4f7e3e41f"}
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.470864 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.4708393060000002 podStartE2EDuration="2.470839306s" podCreationTimestamp="2026-02-02 13:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:20.461032868 +0000 UTC m=+1131.373369328" watchObservedRunningTime="2026-02-02 13:21:20.470839306 +0000 UTC m=+1131.383176096"
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.505716 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.514950 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.533130 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.533382 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:21:20 crc kubenswrapper[4955]: E0202 13:21:20.568840 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60f9606_d4b3_4191_9966_53e71096871c.slice/crio-conmon-44c529185ddba95a89fa6b28c5e3c9310e077e5384c7ca56cf31341378e65c25.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.676803 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzxtb\" (UniqueName: \"kubernetes.io/projected/4a5d9947-5c29-430e-b975-78024809faed-kube-api-access-dzxtb\") pod \"4a5d9947-5c29-430e-b975-78024809faed\" (UID: \"4a5d9947-5c29-430e-b975-78024809faed\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.684348 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5d9947-5c29-430e-b975-78024809faed-kube-api-access-dzxtb" (OuterVolumeSpecName: "kube-api-access-dzxtb") pod "4a5d9947-5c29-430e-b975-78024809faed" (UID: "4a5d9947-5c29-430e-b975-78024809faed"). InnerVolumeSpecName "kube-api-access-dzxtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.775432 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns"
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.780698 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzxtb\" (UniqueName: \"kubernetes.io/projected/4a5d9947-5c29-430e-b975-78024809faed-kube-api-access-dzxtb\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.881910 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-nb\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.882043 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.882107 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-svc\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.882155 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-swift-storage-0\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.882215 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-sb\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.882250 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqc8x\" (UniqueName: \"kubernetes.io/projected/02c53bcf-5b6a-4bc5-b677-b01a827904ff-kube-api-access-pqc8x\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.890021 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c53bcf-5b6a-4bc5-b677-b01a827904ff-kube-api-access-pqc8x" (OuterVolumeSpecName: "kube-api-access-pqc8x") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "kube-api-access-pqc8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.964676 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.983902 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config" (OuterVolumeSpecName: "config") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.984199 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config\") pod \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\" (UID: \"02c53bcf-5b6a-4bc5-b677-b01a827904ff\") "
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.984958 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.984981 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqc8x\" (UniqueName: \"kubernetes.io/projected/02c53bcf-5b6a-4bc5-b677-b01a827904ff-kube-api-access-pqc8x\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:20 crc kubenswrapper[4955]: W0202 13:21:20.985078 4955 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/02c53bcf-5b6a-4bc5-b677-b01a827904ff/volumes/kubernetes.io~configmap/config
Feb 02 13:21:20 crc kubenswrapper[4955]: I0202 13:21:20.985095 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config" (OuterVolumeSpecName: "config") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.010153 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.014377 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.036181 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dxm48"
Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.042910 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02c53bcf-5b6a-4bc5-b677-b01a827904ff" (UID: "02c53bcf-5b6a-4bc5-b677-b01a827904ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.086261 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.086297 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.086308 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.086319 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c53bcf-5b6a-4bc5-b677-b01a827904ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.187431 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-config-data\") pod \"35bfa86c-42fd-4c56-8b93-599e84fb52df\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.187523 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-combined-ca-bundle\") pod \"35bfa86c-42fd-4c56-8b93-599e84fb52df\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.187544 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-scripts\") pod \"35bfa86c-42fd-4c56-8b93-599e84fb52df\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.187599 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msz8q\" (UniqueName: \"kubernetes.io/projected/35bfa86c-42fd-4c56-8b93-599e84fb52df-kube-api-access-msz8q\") pod \"35bfa86c-42fd-4c56-8b93-599e84fb52df\" (UID: \"35bfa86c-42fd-4c56-8b93-599e84fb52df\") " Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.190809 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35bfa86c-42fd-4c56-8b93-599e84fb52df-kube-api-access-msz8q" (OuterVolumeSpecName: "kube-api-access-msz8q") pod "35bfa86c-42fd-4c56-8b93-599e84fb52df" (UID: "35bfa86c-42fd-4c56-8b93-599e84fb52df"). InnerVolumeSpecName "kube-api-access-msz8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.190908 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-scripts" (OuterVolumeSpecName: "scripts") pod "35bfa86c-42fd-4c56-8b93-599e84fb52df" (UID: "35bfa86c-42fd-4c56-8b93-599e84fb52df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.213460 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-config-data" (OuterVolumeSpecName: "config-data") pod "35bfa86c-42fd-4c56-8b93-599e84fb52df" (UID: "35bfa86c-42fd-4c56-8b93-599e84fb52df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.222402 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35bfa86c-42fd-4c56-8b93-599e84fb52df" (UID: "35bfa86c-42fd-4c56-8b93-599e84fb52df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.289687 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.289723 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.289734 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bfa86c-42fd-4c56-8b93-599e84fb52df-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.289742 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msz8q\" (UniqueName: \"kubernetes.io/projected/35bfa86c-42fd-4c56-8b93-599e84fb52df-kube-api-access-msz8q\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.442391 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a5d9947-5c29-430e-b975-78024809faed","Type":"ContainerDied","Data":"642cab9c69c8256242b451a2ef2f270cbb537bb61528b4a41bd7fd574054dd6b"} Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.442440 4955 scope.go:117] "RemoveContainer" containerID="d933295a784ad31146861b17fbbe3681e8b8004cabd5d26bc1ae1ad26b70e093" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.442636 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.447317 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dxm48" event={"ID":"35bfa86c-42fd-4c56-8b93-599e84fb52df","Type":"ContainerDied","Data":"4e9c71a21be50487fd7f624e8befe65aaecb2e9b099707d494baeab0182e91e4"} Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.447355 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dxm48" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.447361 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9c71a21be50487fd7f624e8befe65aaecb2e9b099707d494baeab0182e91e4" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.486972 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.487279 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-log" containerID="cri-o://2dd7a1e5f695263f63a5b85a8c5eafa2426c2d5c75bd7c1bf694bb3a925c9672" gracePeriod=30 Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.487378 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-api" containerID="cri-o://84b964b59b521654a67042365d8fb861f4908ffe5a2df45c9b53b735ba4a8ab6" gracePeriod=30 Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.501914 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.518088 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.518597 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-qw2ns" event={"ID":"02c53bcf-5b6a-4bc5-b677-b01a827904ff","Type":"ContainerDied","Data":"d8456db8a0114eea7b69c2469cec72a24c164b11602ff1bffd4a30fe22ac23d3"} Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.538810 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.552995 4955 scope.go:117] "RemoveContainer" containerID="da73f062eb3f0dd87828ede93621219124e8d98bdf9bab1961dcc5e4f7e3e41f" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.568785 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: E0202 13:21:21.569187 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35bfa86c-42fd-4c56-8b93-599e84fb52df" containerName="nova-manage" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569200 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="35bfa86c-42fd-4c56-8b93-599e84fb52df" containerName="nova-manage" Feb 02 13:21:21 crc kubenswrapper[4955]: E0202 13:21:21.569217 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerName="init" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569223 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerName="init" Feb 02 13:21:21 crc kubenswrapper[4955]: E0202 13:21:21.569235 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerName="dnsmasq-dns" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569241 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerName="dnsmasq-dns" Feb 02 13:21:21 crc kubenswrapper[4955]: E0202 13:21:21.569261 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a5d9947-5c29-430e-b975-78024809faed" containerName="kube-state-metrics" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569267 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5d9947-5c29-430e-b975-78024809faed" containerName="kube-state-metrics" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569463 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="35bfa86c-42fd-4c56-8b93-599e84fb52df" containerName="nova-manage" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569477 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5d9947-5c29-430e-b975-78024809faed" containerName="kube-state-metrics" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.569502 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" containerName="dnsmasq-dns" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.570124 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.573089 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.573204 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.582866 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.602356 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.637744 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-qw2ns"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.638984 4955 scope.go:117] "RemoveContainer" containerID="d42086a3134c4f15c2cb0867708e12c9828c701f75976d7a9d696c97e0434fae" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.681633 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-qw2ns"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.697363 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.698256 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.698477 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.698672 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29q7q\" (UniqueName: \"kubernetes.io/projected/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-api-access-29q7q\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.716480 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.733558 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c53bcf-5b6a-4bc5-b677-b01a827904ff" path="/var/lib/kubelet/pods/02c53bcf-5b6a-4bc5-b677-b01a827904ff/volumes" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.734131 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5d9947-5c29-430e-b975-78024809faed" path="/var/lib/kubelet/pods/4a5d9947-5c29-430e-b975-78024809faed/volumes" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.800487 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.800568 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29q7q\" (UniqueName: \"kubernetes.io/projected/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-api-access-29q7q\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.800702 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.800826 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.805159 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.806602 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.810940 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.821565 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29q7q\" (UniqueName: \"kubernetes.io/projected/7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333-kube-api-access-29q7q\") pod \"kube-state-metrics-0\" (UID: \"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:21 crc kubenswrapper[4955]: I0202 13:21:21.896226 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:21:22 crc kubenswrapper[4955]: W0202 13:21:22.395886 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a7cd6f0_72ce_4cdd_ad99_481f5e3f8333.slice/crio-86f4747763e159a760da69a97e5ff56c49b3779c38da853c2f4c646839f1c62e WatchSource:0}: Error finding container 86f4747763e159a760da69a97e5ff56c49b3779c38da853c2f4c646839f1c62e: Status 404 returned error can't find the container with id 86f4747763e159a760da69a97e5ff56c49b3779c38da853c2f4c646839f1c62e Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.397391 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.526724 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333","Type":"ContainerStarted","Data":"86f4747763e159a760da69a97e5ff56c49b3779c38da853c2f4c646839f1c62e"} Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.529784 4955 generic.go:334] "Generic (PLEG): container finished" podID="83014d43-e06b-4d7f-ae20-35da704852c7" containerID="2dd7a1e5f695263f63a5b85a8c5eafa2426c2d5c75bd7c1bf694bb3a925c9672" exitCode=143 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.529854 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83014d43-e06b-4d7f-ae20-35da704852c7","Type":"ContainerDied","Data":"2dd7a1e5f695263f63a5b85a8c5eafa2426c2d5c75bd7c1bf694bb3a925c9672"} Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.532511 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-log" containerID="cri-o://2b66ee8e4e2efa1da40bb07ff47d1cb0c4298d600cd9624e3e7f72e3bb8a4659" gracePeriod=30 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.532865 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8efb81bd-5ea0-4eb0-98fd-96186da0e309" containerName="nova-scheduler-scheduler" containerID="cri-o://8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993" gracePeriod=30 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.533230 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-metadata" containerID="cri-o://44ba72042e3511728fe4aa81a695a7b2aaaa8bf9d46f01bdd1d3ecb6cfce3a8a" gracePeriod=30 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.698073 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:22 
crc kubenswrapper[4955]: I0202 13:21:22.699088 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="sg-core" containerID="cri-o://4b5f02ce076d438b552f28fed3c979903347281445e0a1b2544039c8aced7cf3" gracePeriod=30 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.699219 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="proxy-httpd" containerID="cri-o://a7f4e63c76cd65dbdeb0abcf919fdd4b5415c6beb0d938ec0c10ef8f0d6a21bf" gracePeriod=30 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.699440 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-notification-agent" containerID="cri-o://9abf72807a12aaf083e4cc8e89156551292ddc0ced25247c78379f5930aa711d" gracePeriod=30 Feb 02 13:21:22 crc kubenswrapper[4955]: I0202 13:21:22.699615 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-central-agent" containerID="cri-o://7d4bcf1def66b6c781f24fe3fa56ffd15e491ad548115023c60e6a8016d83e26" gracePeriod=30 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.549878 4955 generic.go:334] "Generic (PLEG): container finished" podID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerID="a7f4e63c76cd65dbdeb0abcf919fdd4b5415c6beb0d938ec0c10ef8f0d6a21bf" exitCode=0 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.550194 4955 generic.go:334] "Generic (PLEG): container finished" podID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerID="4b5f02ce076d438b552f28fed3c979903347281445e0a1b2544039c8aced7cf3" exitCode=2 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.549950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerDied","Data":"a7f4e63c76cd65dbdeb0abcf919fdd4b5415c6beb0d938ec0c10ef8f0d6a21bf"} Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.550241 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerDied","Data":"4b5f02ce076d438b552f28fed3c979903347281445e0a1b2544039c8aced7cf3"} Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.550256 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerDied","Data":"9abf72807a12aaf083e4cc8e89156551292ddc0ced25247c78379f5930aa711d"} Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.550205 4955 generic.go:334] "Generic (PLEG): container finished" podID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerID="9abf72807a12aaf083e4cc8e89156551292ddc0ced25247c78379f5930aa711d" exitCode=0 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.550297 4955 generic.go:334] "Generic (PLEG): container finished" podID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerID="7d4bcf1def66b6c781f24fe3fa56ffd15e491ad548115023c60e6a8016d83e26" exitCode=0 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.550366 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerDied","Data":"7d4bcf1def66b6c781f24fe3fa56ffd15e491ad548115023c60e6a8016d83e26"} Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.552883 4955 generic.go:334] "Generic (PLEG): container finished" podID="18864d22-515b-4b82-befe-d73dbcf192b9" containerID="44ba72042e3511728fe4aa81a695a7b2aaaa8bf9d46f01bdd1d3ecb6cfce3a8a" exitCode=0 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.552906 4955 generic.go:334] "Generic (PLEG): container finished" podID="18864d22-515b-4b82-befe-d73dbcf192b9" containerID="2b66ee8e4e2efa1da40bb07ff47d1cb0c4298d600cd9624e3e7f72e3bb8a4659" exitCode=143 Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.552911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18864d22-515b-4b82-befe-d73dbcf192b9","Type":"ContainerDied","Data":"44ba72042e3511728fe4aa81a695a7b2aaaa8bf9d46f01bdd1d3ecb6cfce3a8a"} Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.552962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18864d22-515b-4b82-befe-d73dbcf192b9","Type":"ContainerDied","Data":"2b66ee8e4e2efa1da40bb07ff47d1cb0c4298d600cd9624e3e7f72e3bb8a4659"} Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.600955 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.736852 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18864d22-515b-4b82-befe-d73dbcf192b9-logs\") pod \"18864d22-515b-4b82-befe-d73dbcf192b9\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.736890 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-config-data\") pod \"18864d22-515b-4b82-befe-d73dbcf192b9\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.736999 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-nova-metadata-tls-certs\") pod \"18864d22-515b-4b82-befe-d73dbcf192b9\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.737045 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkrnx\" (UniqueName: \"kubernetes.io/projected/18864d22-515b-4b82-befe-d73dbcf192b9-kube-api-access-mkrnx\") pod \"18864d22-515b-4b82-befe-d73dbcf192b9\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.737130 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-combined-ca-bundle\") pod \"18864d22-515b-4b82-befe-d73dbcf192b9\" (UID: \"18864d22-515b-4b82-befe-d73dbcf192b9\") " Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.737245 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18864d22-515b-4b82-befe-d73dbcf192b9-logs" (OuterVolumeSpecName: "logs") pod "18864d22-515b-4b82-befe-d73dbcf192b9" (UID: "18864d22-515b-4b82-befe-d73dbcf192b9"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.737559 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18864d22-515b-4b82-befe-d73dbcf192b9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.746785 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18864d22-515b-4b82-befe-d73dbcf192b9-kube-api-access-mkrnx" (OuterVolumeSpecName: "kube-api-access-mkrnx") pod "18864d22-515b-4b82-befe-d73dbcf192b9" (UID: "18864d22-515b-4b82-befe-d73dbcf192b9"). InnerVolumeSpecName "kube-api-access-mkrnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.774765 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-config-data" (OuterVolumeSpecName: "config-data") pod "18864d22-515b-4b82-befe-d73dbcf192b9" (UID: "18864d22-515b-4b82-befe-d73dbcf192b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.789084 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18864d22-515b-4b82-befe-d73dbcf192b9" (UID: "18864d22-515b-4b82-befe-d73dbcf192b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.797852 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "18864d22-515b-4b82-befe-d73dbcf192b9" (UID: "18864d22-515b-4b82-befe-d73dbcf192b9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.839968 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.840008 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.840020 4955 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/18864d22-515b-4b82-befe-d73dbcf192b9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:23 crc kubenswrapper[4955]: I0202 13:21:23.840033 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkrnx\" (UniqueName: \"kubernetes.io/projected/18864d22-515b-4b82-befe-d73dbcf192b9-kube-api-access-mkrnx\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.250702 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.376531 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx99l\" (UniqueName: \"kubernetes.io/projected/5df1fa3a-f74c-4f25-ade2-3cb472d02546-kube-api-access-zx99l\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.377403 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-combined-ca-bundle\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.377434 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-sg-core-conf-yaml\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.377486 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-log-httpd\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.377592 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-scripts\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.377685 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-config-data\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.377737 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-run-httpd\") pod \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\" (UID: \"5df1fa3a-f74c-4f25-ade2-3cb472d02546\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.378307 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.378895 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.382144 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df1fa3a-f74c-4f25-ade2-3cb472d02546-kube-api-access-zx99l" (OuterVolumeSpecName: "kube-api-access-zx99l") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "kube-api-access-zx99l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.384711 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.385438 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-scripts" (OuterVolumeSpecName: "scripts") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.435340 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.481006 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx99l\" (UniqueName: \"kubernetes.io/projected/5df1fa3a-f74c-4f25-ade2-3cb472d02546-kube-api-access-zx99l\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.481056 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.481068 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.481078 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.481087 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5df1fa3a-f74c-4f25-ade2-3cb472d02546-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.493898 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.510862 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-config-data" (OuterVolumeSpecName: "config-data") pod "5df1fa3a-f74c-4f25-ade2-3cb472d02546" (UID: "5df1fa3a-f74c-4f25-ade2-3cb472d02546"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.565225 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18864d22-515b-4b82-befe-d73dbcf192b9","Type":"ContainerDied","Data":"d229db4040f97f53e4f727dd5ac035e7b5bf2dfba9288aadfd9a9ab75edbde05"} Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.565267 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.566117 4955 scope.go:117] "RemoveContainer" containerID="44ba72042e3511728fe4aa81a695a7b2aaaa8bf9d46f01bdd1d3ecb6cfce3a8a" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.567630 4955 generic.go:334] "Generic (PLEG): container finished" podID="8efb81bd-5ea0-4eb0-98fd-96186da0e309" containerID="8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993" exitCode=0 Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.567684 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8efb81bd-5ea0-4eb0-98fd-96186da0e309","Type":"ContainerDied","Data":"8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993"} Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.567703 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8efb81bd-5ea0-4eb0-98fd-96186da0e309","Type":"ContainerDied","Data":"8d9df5b9e120f7dea0f909a1397749b73772acbc6b389dff442d822c3de45ec5"} Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.567723 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.569773 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333","Type":"ContainerStarted","Data":"cf5b5d2b7329be387392f7842cc1d4e0c7f25b0b378eb5c2f233071394bb22e4"} Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.569939 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.572120 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5df1fa3a-f74c-4f25-ade2-3cb472d02546","Type":"ContainerDied","Data":"f833c63cae21661d00659de07239350741f17f25513247a119f488d8e6c6adac"} Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.572210 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.582333 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-combined-ca-bundle\") pod \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.582637 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-config-data\") pod \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.582783 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldl5r\" (UniqueName: \"kubernetes.io/projected/8efb81bd-5ea0-4eb0-98fd-96186da0e309-kube-api-access-ldl5r\") pod \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\" (UID: \"8efb81bd-5ea0-4eb0-98fd-96186da0e309\") " Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.583358 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.583444 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df1fa3a-f74c-4f25-ade2-3cb472d02546-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.590882 4955 scope.go:117] "RemoveContainer" containerID="2b66ee8e4e2efa1da40bb07ff47d1cb0c4298d600cd9624e3e7f72e3bb8a4659" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.593906 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efb81bd-5ea0-4eb0-98fd-96186da0e309-kube-api-access-ldl5r" (OuterVolumeSpecName: "kube-api-access-ldl5r") pod "8efb81bd-5ea0-4eb0-98fd-96186da0e309" (UID: "8efb81bd-5ea0-4eb0-98fd-96186da0e309"). InnerVolumeSpecName "kube-api-access-ldl5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.614832 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.354996329 podStartE2EDuration="3.614812001s" podCreationTimestamp="2026-02-02 13:21:21 +0000 UTC" firstStartedPulling="2026-02-02 13:21:22.398433689 +0000 UTC m=+1133.310770139" lastFinishedPulling="2026-02-02 13:21:23.658249361 +0000 UTC m=+1134.570585811" observedRunningTime="2026-02-02 13:21:24.608701692 +0000 UTC m=+1135.521038142" watchObservedRunningTime="2026-02-02 13:21:24.614812001 +0000 UTC m=+1135.527148461" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.621189 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-config-data" (OuterVolumeSpecName: "config-data") pod "8efb81bd-5ea0-4eb0-98fd-96186da0e309" (UID: "8efb81bd-5ea0-4eb0-98fd-96186da0e309"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.630908 4955 scope.go:117] "RemoveContainer" containerID="8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.647614 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.663219 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8efb81bd-5ea0-4eb0-98fd-96186da0e309" (UID: "8efb81bd-5ea0-4eb0-98fd-96186da0e309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.696952 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.704112 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldl5r\" (UniqueName: \"kubernetes.io/projected/8efb81bd-5ea0-4eb0-98fd-96186da0e309-kube-api-access-ldl5r\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.704245 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.704263 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb81bd-5ea0-4eb0-98fd-96186da0e309-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.721662 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.730744 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.740076 4955 scope.go:117] "RemoveContainer" containerID="8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.740950 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741474 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="proxy-httpd" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741504 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="proxy-httpd" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741522 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-log" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741532 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-log" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741549 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efb81bd-5ea0-4eb0-98fd-96186da0e309" containerName="nova-scheduler-scheduler" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741561 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8efb81bd-5ea0-4eb0-98fd-96186da0e309" containerName="nova-scheduler-scheduler" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741607 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-metadata" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741617 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-metadata" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741636 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-notification-agent" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741645 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-notification-agent" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741660 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="sg-core" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741667 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="sg-core" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.741686 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-central-agent" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741694 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-central-agent" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741918 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-notification-agent" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741939 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="ceilometer-central-agent" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741957 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="proxy-httpd" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741972 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" containerName="sg-core" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.741987 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efb81bd-5ea0-4eb0-98fd-96186da0e309" containerName="nova-scheduler-scheduler" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.742003 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-metadata" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.742016 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" containerName="nova-metadata-log" Feb 02 13:21:24 crc kubenswrapper[4955]: E0202 13:21:24.743181 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993\": container with ID starting with 8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993 not found: ID does not exist" 
containerID="8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.743217 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993"} err="failed to get container status \"8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993\": rpc error: code = NotFound desc = could not find container \"8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993\": container with ID starting with 8eb7f288cede9c16b3a95321773805f5d314d41dfe059af803f87d100ceea993 not found: ID does not exist" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.743240 4955 scope.go:117] "RemoveContainer" containerID="a7f4e63c76cd65dbdeb0abcf919fdd4b5415c6beb0d938ec0c10ef8f0d6a21bf" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.745661 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.751048 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.751189 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.751287 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.752315 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.754966 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.758445 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.758685 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.769825 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.784015 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.789385 4955 scope.go:117] "RemoveContainer" containerID="4b5f02ce076d438b552f28fed3c979903347281445e0a1b2544039c8aced7cf3" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.806437 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-log-httpd\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.806511 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-run-httpd\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.806547 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.806690 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.806711 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-config-data\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.806975 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272qh\" (UniqueName: \"kubernetes.io/projected/b5c3a4ff-d989-4604-9515-619124f0b5f5-kube-api-access-272qh\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.807068 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-scripts\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 
13:21:24.807185 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5c3a4ff-d989-4604-9515-619124f0b5f5-logs\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.807246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.807275 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxw4\" (UniqueName: \"kubernetes.io/projected/814695e5-ccd0-42ab-b6f8-a924bdfc330d-kube-api-access-mmxw4\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.807318 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.807387 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.807423 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-config-data\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.810734 4955 scope.go:117] "RemoveContainer" containerID="9abf72807a12aaf083e4cc8e89156551292ddc0ced25247c78379f5930aa711d" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.832927 4955 scope.go:117] "RemoveContainer" containerID="7d4bcf1def66b6c781f24fe3fa56ffd15e491ad548115023c60e6a8016d83e26" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.905010 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909436 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-run-httpd\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909507 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909549 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909595 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-config-data\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909654 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272qh\" (UniqueName: \"kubernetes.io/projected/b5c3a4ff-d989-4604-9515-619124f0b5f5-kube-api-access-272qh\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909696 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-scripts\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909746 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5c3a4ff-d989-4604-9515-619124f0b5f5-logs\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909771 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909790 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxw4\" (UniqueName: \"kubernetes.io/projected/814695e5-ccd0-42ab-b6f8-a924bdfc330d-kube-api-access-mmxw4\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909813 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909834 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.909851 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-config-data\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 
crc kubenswrapper[4955]: I0202 13:21:24.909901 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-log-httpd\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.910385 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-log-httpd\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.910693 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-run-httpd\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.911656 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5c3a4ff-d989-4604-9515-619124f0b5f5-logs\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.914795 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.917541 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-config-data\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.919015 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.919337 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.919982 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-scripts\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.921182 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.922840 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.923054 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-config-data\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.925505 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.928836 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.930041 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.932300 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.945536 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxw4\" (UniqueName: \"kubernetes.io/projected/814695e5-ccd0-42ab-b6f8-a924bdfc330d-kube-api-access-mmxw4\") pod \"ceilometer-0\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") " pod="openstack/ceilometer-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.948138 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272qh\" (UniqueName: \"kubernetes.io/projected/b5c3a4ff-d989-4604-9515-619124f0b5f5-kube-api-access-272qh\") pod \"nova-metadata-0\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " pod="openstack/nova-metadata-0" Feb 02 13:21:24 crc kubenswrapper[4955]: I0202 13:21:24.991136 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.011962 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwgq\" (UniqueName: \"kubernetes.io/projected/644dbcee-4378-48ce-9a1c-e2c7369db99a-kube-api-access-kmwgq\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.012424 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-config-data\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.012622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.084178 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.097829 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.117001 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-config-data\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.117154 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.117237 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwgq\" (UniqueName: \"kubernetes.io/projected/644dbcee-4378-48ce-9a1c-e2c7369db99a-kube-api-access-kmwgq\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.120209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.121127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-config-data\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.144414 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwgq\" (UniqueName: \"kubernetes.io/projected/644dbcee-4378-48ce-9a1c-e2c7369db99a-kube-api-access-kmwgq\") pod \"nova-scheduler-0\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.345615 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.628629 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.688941 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:21:25 crc kubenswrapper[4955]: W0202 13:21:25.699735 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5c3a4ff_d989_4604_9515_619124f0b5f5.slice/crio-ef601f5228e9487474ad6d87df3ac8fdda1c7c3e2a5bbd12066161c1af5818a7 WatchSource:0}: Error finding container ef601f5228e9487474ad6d87df3ac8fdda1c7c3e2a5bbd12066161c1af5818a7: Status 404 returned error can't find the container with id ef601f5228e9487474ad6d87df3ac8fdda1c7c3e2a5bbd12066161c1af5818a7 Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.734374 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18864d22-515b-4b82-befe-d73dbcf192b9" path="/var/lib/kubelet/pods/18864d22-515b-4b82-befe-d73dbcf192b9/volumes" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.735225 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df1fa3a-f74c-4f25-ade2-3cb472d02546" path="/var/lib/kubelet/pods/5df1fa3a-f74c-4f25-ade2-3cb472d02546/volumes" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.736109 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efb81bd-5ea0-4eb0-98fd-96186da0e309" path="/var/lib/kubelet/pods/8efb81bd-5ea0-4eb0-98fd-96186da0e309/volumes" Feb 02 13:21:25 crc kubenswrapper[4955]: I0202 13:21:25.827033 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:21:25 crc kubenswrapper[4955]: W0202 13:21:25.844731 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644dbcee_4378_48ce_9a1c_e2c7369db99a.slice/crio-c7eebe91c0879d1d844e77b72deb030316086bafdac9ce392432631414cc2b72 WatchSource:0}: Error finding container c7eebe91c0879d1d844e77b72deb030316086bafdac9ce392432631414cc2b72: Status 404 returned error can't find the container with id c7eebe91c0879d1d844e77b72deb030316086bafdac9ce392432631414cc2b72 Feb 02 13:21:26 crc kubenswrapper[4955]: I0202 13:21:26.606192 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerStarted","Data":"bde029fadd45af3d23809a2d92f145c19c7d0f9a9717ec3ffdc6ca1386b53bd0"} Feb 02 13:21:26 crc kubenswrapper[4955]: I0202 13:21:26.607757 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"644dbcee-4378-48ce-9a1c-e2c7369db99a","Type":"ContainerStarted","Data":"ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573"} Feb 02 13:21:26 crc kubenswrapper[4955]: I0202 13:21:26.607831 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"644dbcee-4378-48ce-9a1c-e2c7369db99a","Type":"ContainerStarted","Data":"c7eebe91c0879d1d844e77b72deb030316086bafdac9ce392432631414cc2b72"} Feb 02 13:21:26 crc kubenswrapper[4955]: I0202 13:21:26.609222 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b5c3a4ff-d989-4604-9515-619124f0b5f5","Type":"ContainerStarted","Data":"3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53"} Feb 02 
13:21:26 crc kubenswrapper[4955]: I0202 13:21:26.609266 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b5c3a4ff-d989-4604-9515-619124f0b5f5","Type":"ContainerStarted","Data":"ef601f5228e9487474ad6d87df3ac8fdda1c7c3e2a5bbd12066161c1af5818a7"} Feb 02 13:21:26 crc kubenswrapper[4955]: I0202 13:21:26.632120 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.632100179 podStartE2EDuration="2.632100179s" podCreationTimestamp="2026-02-02 13:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:26.624373022 +0000 UTC m=+1137.536709472" watchObservedRunningTime="2026-02-02 13:21:26.632100179 +0000 UTC m=+1137.544436629" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.621409 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b5c3a4ff-d989-4604-9515-619124f0b5f5","Type":"ContainerStarted","Data":"1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9"} Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.624987 4955 generic.go:334] "Generic (PLEG): container finished" podID="83014d43-e06b-4d7f-ae20-35da704852c7" containerID="84b964b59b521654a67042365d8fb861f4908ffe5a2df45c9b53b735ba4a8ab6" exitCode=0 Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.625737 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83014d43-e06b-4d7f-ae20-35da704852c7","Type":"ContainerDied","Data":"84b964b59b521654a67042365d8fb861f4908ffe5a2df45c9b53b735ba4a8ab6"} Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.625848 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83014d43-e06b-4d7f-ae20-35da704852c7","Type":"ContainerDied","Data":"787bdee69f49228560733961d5a3c6f334bf4066ce100b828bb6fd2181317ecc"} Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.625927 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="787bdee69f49228560733961d5a3c6f334bf4066ce100b828bb6fd2181317ecc" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.661999 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.66198044 podStartE2EDuration="3.66198044s" podCreationTimestamp="2026-02-02 13:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:27.658916466 +0000 UTC m=+1138.571252916" watchObservedRunningTime="2026-02-02 13:21:27.66198044 +0000 UTC m=+1138.574316890" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.792848 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.893495 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-combined-ca-bundle\") pod \"83014d43-e06b-4d7f-ae20-35da704852c7\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.893700 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngssb\" (UniqueName: \"kubernetes.io/projected/83014d43-e06b-4d7f-ae20-35da704852c7-kube-api-access-ngssb\") pod \"83014d43-e06b-4d7f-ae20-35da704852c7\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.893740 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-config-data\") pod \"83014d43-e06b-4d7f-ae20-35da704852c7\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.894013 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83014d43-e06b-4d7f-ae20-35da704852c7-logs\") pod \"83014d43-e06b-4d7f-ae20-35da704852c7\" (UID: \"83014d43-e06b-4d7f-ae20-35da704852c7\") " Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.894505 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83014d43-e06b-4d7f-ae20-35da704852c7-logs" (OuterVolumeSpecName: "logs") pod "83014d43-e06b-4d7f-ae20-35da704852c7" (UID: "83014d43-e06b-4d7f-ae20-35da704852c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.894702 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83014d43-e06b-4d7f-ae20-35da704852c7-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.897640 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83014d43-e06b-4d7f-ae20-35da704852c7-kube-api-access-ngssb" (OuterVolumeSpecName: "kube-api-access-ngssb") pod "83014d43-e06b-4d7f-ae20-35da704852c7" (UID: "83014d43-e06b-4d7f-ae20-35da704852c7"). InnerVolumeSpecName "kube-api-access-ngssb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.921860 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-config-data" (OuterVolumeSpecName: "config-data") pod "83014d43-e06b-4d7f-ae20-35da704852c7" (UID: "83014d43-e06b-4d7f-ae20-35da704852c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.922610 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83014d43-e06b-4d7f-ae20-35da704852c7" (UID: "83014d43-e06b-4d7f-ae20-35da704852c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.996156 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.996186 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngssb\" (UniqueName: \"kubernetes.io/projected/83014d43-e06b-4d7f-ae20-35da704852c7-kube-api-access-ngssb\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:27 crc kubenswrapper[4955]: I0202 13:21:27.996195 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83014d43-e06b-4d7f-ae20-35da704852c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.633797 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerStarted","Data":"76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a"} Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.634692 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.679623 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.688102 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.695348 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:28 crc kubenswrapper[4955]: E0202 13:21:28.695756 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-log" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.695770 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-log" Feb 02 13:21:28 crc kubenswrapper[4955]: E0202 13:21:28.695795 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-api" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.695802 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-api" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.695973 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-log" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.695991 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" containerName="nova-api-api" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.696902 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.703658 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.714167 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.811897 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.811955 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/518d5b92-e7c7-49b2-947f-caed5758afcd-logs\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.811998 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnmv8\" (UniqueName: \"kubernetes.io/projected/518d5b92-e7c7-49b2-947f-caed5758afcd-kube-api-access-wnmv8\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.812081 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-config-data\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.913672 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-config-data\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.913802 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.913855 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/518d5b92-e7c7-49b2-947f-caed5758afcd-logs\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.913896 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnmv8\" (UniqueName: \"kubernetes.io/projected/518d5b92-e7c7-49b2-947f-caed5758afcd-kube-api-access-wnmv8\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.914294 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/518d5b92-e7c7-49b2-947f-caed5758afcd-logs\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " 
pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.917973 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.923761 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-config-data\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:28 crc kubenswrapper[4955]: I0202 13:21:28.935970 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnmv8\" (UniqueName: \"kubernetes.io/projected/518d5b92-e7c7-49b2-947f-caed5758afcd-kube-api-access-wnmv8\") pod \"nova-api-0\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") " pod="openstack/nova-api-0" Feb 02 13:21:29 crc kubenswrapper[4955]: I0202 13:21:29.013493 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:29 crc kubenswrapper[4955]: W0202 13:21:29.438409 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod518d5b92_e7c7_49b2_947f_caed5758afcd.slice/crio-ec70e88b22f87158340696b2a008d429e4676bf083f2e8f1e1c2072721a90dfb WatchSource:0}: Error finding container ec70e88b22f87158340696b2a008d429e4676bf083f2e8f1e1c2072721a90dfb: Status 404 returned error can't find the container with id ec70e88b22f87158340696b2a008d429e4676bf083f2e8f1e1c2072721a90dfb Feb 02 13:21:29 crc kubenswrapper[4955]: I0202 13:21:29.441876 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:29 crc kubenswrapper[4955]: I0202 13:21:29.643135 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerStarted","Data":"6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a"} Feb 02 13:21:29 crc kubenswrapper[4955]: I0202 13:21:29.644634 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"518d5b92-e7c7-49b2-947f-caed5758afcd","Type":"ContainerStarted","Data":"b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40"} Feb 02 13:21:29 crc kubenswrapper[4955]: I0202 13:21:29.644682 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"518d5b92-e7c7-49b2-947f-caed5758afcd","Type":"ContainerStarted","Data":"ec70e88b22f87158340696b2a008d429e4676bf083f2e8f1e1c2072721a90dfb"} Feb 02 13:21:29 crc kubenswrapper[4955]: I0202 13:21:29.740453 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83014d43-e06b-4d7f-ae20-35da704852c7" path="/var/lib/kubelet/pods/83014d43-e06b-4d7f-ae20-35da704852c7/volumes" Feb 02 13:21:30 crc kubenswrapper[4955]: I0202 13:21:30.100956 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 13:21:30 crc kubenswrapper[4955]: I0202 13:21:30.101272 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 13:21:30 crc kubenswrapper[4955]: I0202 13:21:30.346673 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Feb 02 13:21:30 crc kubenswrapper[4955]: I0202 13:21:30.655067 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerStarted","Data":"d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250"} Feb 02 13:21:30 crc kubenswrapper[4955]: I0202 13:21:30.658122 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"518d5b92-e7c7-49b2-947f-caed5758afcd","Type":"ContainerStarted","Data":"1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695"} Feb 02 13:21:30 crc kubenswrapper[4955]: I0202 13:21:30.690519 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.690501667 podStartE2EDuration="2.690501667s" podCreationTimestamp="2026-02-02 13:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:30.681284753 +0000 UTC m=+1141.593621223" watchObservedRunningTime="2026-02-02 13:21:30.690501667 +0000 UTC m=+1141.602838117" Feb 02 13:21:31 crc kubenswrapper[4955]: I0202 13:21:31.906298 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.017130 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.017674 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.017715 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.018360 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"602bc594f34404ff8d3d47bc3c3720ccccc87bdb99931ff5e26638726c7febe5"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.018409 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://602bc594f34404ff8d3d47bc3c3720ccccc87bdb99931ff5e26638726c7febe5" gracePeriod=600 Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.686051 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerStarted","Data":"c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4"} Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.686373 4955 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.688930 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="602bc594f34404ff8d3d47bc3c3720ccccc87bdb99931ff5e26638726c7febe5" exitCode=0 Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.688962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"602bc594f34404ff8d3d47bc3c3720ccccc87bdb99931ff5e26638726c7febe5"} Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.688985 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"de8f7b51852eedd7c330a4f405023f03d69b18c14dc6e890327bc3a4eab66f6a"} Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.689005 4955 scope.go:117] "RemoveContainer" containerID="2cd1b5f598a7c72d423d2d4f07c02704decf6f32b1b11d2eaf56ffcde03b7e1b" Feb 02 13:21:33 crc kubenswrapper[4955]: I0202 13:21:33.715615 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.921619169 podStartE2EDuration="9.715596051s" podCreationTimestamp="2026-02-02 13:21:24 +0000 UTC" firstStartedPulling="2026-02-02 13:21:25.641054393 +0000 UTC m=+1136.553390843" lastFinishedPulling="2026-02-02 13:21:32.435031275 +0000 UTC m=+1143.347367725" observedRunningTime="2026-02-02 13:21:33.705184358 +0000 UTC m=+1144.617520808" watchObservedRunningTime="2026-02-02 13:21:33.715596051 +0000 UTC m=+1144.627932501" Feb 02 13:21:35 crc kubenswrapper[4955]: I0202 13:21:35.098206 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 13:21:35 crc kubenswrapper[4955]: I0202 13:21:35.099604 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 13:21:35 crc kubenswrapper[4955]: I0202 13:21:35.345785 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 13:21:35 crc kubenswrapper[4955]: I0202 13:21:35.371995 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 13:21:35 crc kubenswrapper[4955]: I0202 13:21:35.751749 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 13:21:36 crc kubenswrapper[4955]: I0202 13:21:36.114746 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:21:36 crc kubenswrapper[4955]: I0202 13:21:36.114746 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:21:39 crc kubenswrapper[4955]: I0202 13:21:39.014748 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 
13:21:39 crc kubenswrapper[4955]: I0202 13:21:39.015095 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:21:40 crc kubenswrapper[4955]: I0202 13:21:40.096738 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:21:40 crc kubenswrapper[4955]: I0202 13:21:40.096849 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:21:40 crc kubenswrapper[4955]: I0202 13:21:40.759072 4955 generic.go:334] "Generic (PLEG): container finished" podID="4b673660-45c3-419e-af6f-66cb08d272e0" containerID="f2ecabd0323a48a97418b5e95039ea3e8010eb1ff6b95df11e39612af5422926" exitCode=0 Feb 02 13:21:40 crc kubenswrapper[4955]: I0202 13:21:40.759156 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" event={"ID":"4b673660-45c3-419e-af6f-66cb08d272e0","Type":"ContainerDied","Data":"f2ecabd0323a48a97418b5e95039ea3e8010eb1ff6b95df11e39612af5422926"} Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.122258 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.271205 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-config-data\") pod \"4b673660-45c3-419e-af6f-66cb08d272e0\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.271314 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-combined-ca-bundle\") pod \"4b673660-45c3-419e-af6f-66cb08d272e0\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.271335 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdgjc\" (UniqueName: \"kubernetes.io/projected/4b673660-45c3-419e-af6f-66cb08d272e0-kube-api-access-vdgjc\") pod \"4b673660-45c3-419e-af6f-66cb08d272e0\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.271376 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-scripts\") pod \"4b673660-45c3-419e-af6f-66cb08d272e0\" (UID: \"4b673660-45c3-419e-af6f-66cb08d272e0\") " Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.277128 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-scripts" (OuterVolumeSpecName: "scripts") pod "4b673660-45c3-419e-af6f-66cb08d272e0" (UID: "4b673660-45c3-419e-af6f-66cb08d272e0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.277984 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b673660-45c3-419e-af6f-66cb08d272e0-kube-api-access-vdgjc" (OuterVolumeSpecName: "kube-api-access-vdgjc") pod "4b673660-45c3-419e-af6f-66cb08d272e0" (UID: "4b673660-45c3-419e-af6f-66cb08d272e0"). InnerVolumeSpecName "kube-api-access-vdgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.301914 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-config-data" (OuterVolumeSpecName: "config-data") pod "4b673660-45c3-419e-af6f-66cb08d272e0" (UID: "4b673660-45c3-419e-af6f-66cb08d272e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.303709 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b673660-45c3-419e-af6f-66cb08d272e0" (UID: "4b673660-45c3-419e-af6f-66cb08d272e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.373322 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.373378 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdgjc\" (UniqueName: \"kubernetes.io/projected/4b673660-45c3-419e-af6f-66cb08d272e0-kube-api-access-vdgjc\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.373394 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.373407 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b673660-45c3-419e-af6f-66cb08d272e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.776816 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" event={"ID":"4b673660-45c3-419e-af6f-66cb08d272e0","Type":"ContainerDied","Data":"0847bfe8cc0c2bbedb4ee0439185124b7973a90aa35c636c0a2ad74c0e7165ae"} Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.777075 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0847bfe8cc0c2bbedb4ee0439185124b7973a90aa35c636c0a2ad74c0e7165ae" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.776857 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q5zzn" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.862252 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 13:21:42 crc kubenswrapper[4955]: E0202 13:21:42.862999 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b673660-45c3-419e-af6f-66cb08d272e0" containerName="nova-cell1-conductor-db-sync" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.863028 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b673660-45c3-419e-af6f-66cb08d272e0" containerName="nova-cell1-conductor-db-sync" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.863243 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b673660-45c3-419e-af6f-66cb08d272e0" containerName="nova-cell1-conductor-db-sync" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.864212 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.872678 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.899542 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.900694 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5793f-fd0c-4e87-9e72-dbd21447e050-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.900914 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppm8\" (UniqueName: \"kubernetes.io/projected/3da5793f-fd0c-4e87-9e72-dbd21447e050-kube-api-access-8ppm8\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:42 crc kubenswrapper[4955]: I0202 13:21:42.901019 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5793f-fd0c-4e87-9e72-dbd21447e050-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.002013 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5793f-fd0c-4e87-9e72-dbd21447e050-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.002123 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5793f-fd0c-4e87-9e72-dbd21447e050-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.002201 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ppm8\" (UniqueName: 
\"kubernetes.io/projected/3da5793f-fd0c-4e87-9e72-dbd21447e050-kube-api-access-8ppm8\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.006471 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5793f-fd0c-4e87-9e72-dbd21447e050-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.006516 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5793f-fd0c-4e87-9e72-dbd21447e050-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.021745 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ppm8\" (UniqueName: \"kubernetes.io/projected/3da5793f-fd0c-4e87-9e72-dbd21447e050-kube-api-access-8ppm8\") pod \"nova-cell1-conductor-0\" (UID: \"3da5793f-fd0c-4e87-9e72-dbd21447e050\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.113758 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.538171 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 13:21:43 crc kubenswrapper[4955]: W0202 13:21:43.546706 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3da5793f_fd0c_4e87_9e72_dbd21447e050.slice/crio-862a0a11fa2f57c09655609a211308b45e4f454138f7de261f35b5f304497ed8 WatchSource:0}: Error finding container 862a0a11fa2f57c09655609a211308b45e4f454138f7de261f35b5f304497ed8: Status 404 returned error can't find the container with id 862a0a11fa2f57c09655609a211308b45e4f454138f7de261f35b5f304497ed8 Feb 02 13:21:43 crc kubenswrapper[4955]: I0202 13:21:43.786225 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3da5793f-fd0c-4e87-9e72-dbd21447e050","Type":"ContainerStarted","Data":"862a0a11fa2f57c09655609a211308b45e4f454138f7de261f35b5f304497ed8"} Feb 02 13:21:44 crc kubenswrapper[4955]: I0202 13:21:44.796746 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3da5793f-fd0c-4e87-9e72-dbd21447e050","Type":"ContainerStarted","Data":"eaf805483903336c594e1457d6eb4adcf75c6780c65f76ccf9ad1d412895ae75"} Feb 02 13:21:44 crc kubenswrapper[4955]: I0202 13:21:44.797254 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 13:21:44 crc kubenswrapper[4955]: I0202 13:21:44.815476 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.815454678 podStartE2EDuration="2.815454678s" podCreationTimestamp="2026-02-02 13:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:44.811657146 +0000 UTC m=+1155.723993586" watchObservedRunningTime="2026-02-02 13:21:44.815454678 +0000 UTC m=+1155.727791128" 
Feb 02 13:21:45 crc kubenswrapper[4955]: I0202 13:21:45.103368 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 13:21:45 crc kubenswrapper[4955]: I0202 13:21:45.103484 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 13:21:45 crc kubenswrapper[4955]: I0202 13:21:45.110667 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 13:21:45 crc kubenswrapper[4955]: I0202 13:21:45.112752 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.777229 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.815173 4955 generic.go:334] "Generic (PLEG): container finished" podID="5f617af2-0a24-4beb-a9d3-76928d6bc9e1" containerID="e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270" exitCode=137
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.815224 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.815243 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5f617af2-0a24-4beb-a9d3-76928d6bc9e1","Type":"ContainerDied","Data":"e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270"}
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.815850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5f617af2-0a24-4beb-a9d3-76928d6bc9e1","Type":"ContainerDied","Data":"7956db3826a1a3d3f88a92159a9d280d307023f5e7c3f01cb27ac70020cb20f1"}
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.815889 4955 scope.go:117] "RemoveContainer" containerID="e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270"
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.834881 4955 scope.go:117] "RemoveContainer" containerID="e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270"
Feb 02 13:21:46 crc kubenswrapper[4955]: E0202 13:21:46.835428 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270\": container with ID starting with e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270 not found: ID does not exist" containerID="e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270"
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.835469 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270"} err="failed to get container status \"e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270\": rpc error: code = NotFound desc = could not find container \"e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270\": container with ID starting with e774ce2caac6d61131c7cf41e0b4e260f2f08321750db456b9756bb15287f270 not found: ID does not exist"
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.974082 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-config-data\") pod \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") "
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.974138 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-combined-ca-bundle\") pod \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") "
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.974202 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88h6s\" (UniqueName: \"kubernetes.io/projected/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-kube-api-access-88h6s\") pod \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\" (UID: \"5f617af2-0a24-4beb-a9d3-76928d6bc9e1\") "
Feb 02 13:21:46 crc kubenswrapper[4955]: I0202 13:21:46.978982 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-kube-api-access-88h6s" (OuterVolumeSpecName: "kube-api-access-88h6s") pod "5f617af2-0a24-4beb-a9d3-76928d6bc9e1" (UID: "5f617af2-0a24-4beb-a9d3-76928d6bc9e1"). InnerVolumeSpecName "kube-api-access-88h6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.000490 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f617af2-0a24-4beb-a9d3-76928d6bc9e1" (UID: "5f617af2-0a24-4beb-a9d3-76928d6bc9e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.005464 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-config-data" (OuterVolumeSpecName: "config-data") pod "5f617af2-0a24-4beb-a9d3-76928d6bc9e1" (UID: "5f617af2-0a24-4beb-a9d3-76928d6bc9e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.078308 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.078377 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.078401 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88h6s\" (UniqueName: \"kubernetes.io/projected/5f617af2-0a24-4beb-a9d3-76928d6bc9e1-kube-api-access-88h6s\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.155098 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.166313 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.228785 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:21:47 crc kubenswrapper[4955]: E0202 13:21:47.229168 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f617af2-0a24-4beb-a9d3-76928d6bc9e1" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.229184 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f617af2-0a24-4beb-a9d3-76928d6bc9e1" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.229384 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f617af2-0a24-4beb-a9d3-76928d6bc9e1" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.230011 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.233888 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.236713 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.237070 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.241078 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.280897 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.280991 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.281128 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.281177 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgpbc\" (UniqueName: \"kubernetes.io/projected/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-kube-api-access-qgpbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.281203 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.383280 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.383391 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.383435 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgpbc\" (UniqueName: \"kubernetes.io/projected/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-kube-api-access-qgpbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.383457 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.383485 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.387616 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.387632 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.395147 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.395253 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.398843 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgpbc\" (UniqueName: \"kubernetes.io/projected/6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce-kube-api-access-qgpbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.545884 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:47 crc kubenswrapper[4955]: I0202 13:21:47.728741 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f617af2-0a24-4beb-a9d3-76928d6bc9e1" path="/var/lib/kubelet/pods/5f617af2-0a24-4beb-a9d3-76928d6bc9e1/volumes"
Feb 02 13:21:48 crc kubenswrapper[4955]: I0202 13:21:48.025539 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:21:48 crc kubenswrapper[4955]: W0202 13:21:48.028335 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba337f5_4c96_4bcd_ad8e_9e12ebc857ce.slice/crio-5b83fc5518ec93e5da327215951011bc6f242a91723e96f26afdb7c86e8c4918 WatchSource:0}: Error finding container 5b83fc5518ec93e5da327215951011bc6f242a91723e96f26afdb7c86e8c4918: Status 404 returned error can't find the container with id 5b83fc5518ec93e5da327215951011bc6f242a91723e96f26afdb7c86e8c4918
Feb 02 13:21:48 crc kubenswrapper[4955]: I0202 13:21:48.141336 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 02 13:21:48 crc kubenswrapper[4955]: I0202 13:21:48.835942 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce","Type":"ContainerStarted","Data":"aae2ee87e2c16abfb83c97af18a2aa46712ac0b4b19a5c2b213204394e01ec67"}
Feb 02 13:21:48 crc kubenswrapper[4955]: I0202 13:21:48.836230 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce","Type":"ContainerStarted","Data":"5b83fc5518ec93e5da327215951011bc6f242a91723e96f26afdb7c86e8c4918"}
Feb 02 13:21:48 crc kubenswrapper[4955]: I0202 13:21:48.852897 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.852876336 podStartE2EDuration="1.852876336s" podCreationTimestamp="2026-02-02 13:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:48.850531119 +0000 UTC m=+1159.762867599" watchObservedRunningTime="2026-02-02 13:21:48.852876336 +0000 UTC m=+1159.765212786"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.019748 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.019823 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.020264 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.020287 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.024148 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.024197 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.211494 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"]
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.220268 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.227814 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vx77\" (UniqueName: \"kubernetes.io/projected/0ff9b0c5-ac10-49d8-8876-f605852f490d-kube-api-access-2vx77\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.227879 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.227949 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.227981 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.228220 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.228289 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.242966 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"]
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.330848 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.330919 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.330988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.331014 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.331068 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vx77\" (UniqueName: \"kubernetes.io/projected/0ff9b0c5-ac10-49d8-8876-f605852f490d-kube-api-access-2vx77\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.331098 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.331836 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.331941 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.331993 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.332413 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.332625 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.349182 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vx77\" (UniqueName: \"kubernetes.io/projected/0ff9b0c5-ac10-49d8-8876-f605852f490d-kube-api-access-2vx77\") pod \"dnsmasq-dns-6b7bbf7cf9-4qvx5\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:49 crc kubenswrapper[4955]: I0202 13:21:49.549370 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:50 crc kubenswrapper[4955]: I0202 13:21:50.109906 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"]
Feb 02 13:21:50 crc kubenswrapper[4955]: I0202 13:21:50.856869 4955 generic.go:334] "Generic (PLEG): container finished" podID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerID="765149ad9f9697994a8d85fd36e047189b0ce314d690370fc10e6f9971633193" exitCode=0
Feb 02 13:21:50 crc kubenswrapper[4955]: I0202 13:21:50.856941 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" event={"ID":"0ff9b0c5-ac10-49d8-8876-f605852f490d","Type":"ContainerDied","Data":"765149ad9f9697994a8d85fd36e047189b0ce314d690370fc10e6f9971633193"}
Feb 02 13:21:50 crc kubenswrapper[4955]: I0202 13:21:50.857335 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" event={"ID":"0ff9b0c5-ac10-49d8-8876-f605852f490d","Type":"ContainerStarted","Data":"ec1dbf3910ae5d5c45344e761c99dbea9d9b99f524b280b5b1c6f25a5feebab8"}
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.491500 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.534053 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.534398 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-central-agent" containerID="cri-o://76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a" gracePeriod=30
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.534532 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="proxy-httpd" containerID="cri-o://c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4" gracePeriod=30
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.534580 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-notification-agent" containerID="cri-o://6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a" gracePeriod=30
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.534618 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="sg-core" containerID="cri-o://d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250" gracePeriod=30
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.542901 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.204:3000/\": read tcp 10.217.0.2:54150->10.217.0.204:3000: read: connection reset by peer"
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.869764 4955 generic.go:334] "Generic (PLEG): container finished" podID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerID="c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4" exitCode=0
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.869807 4955 generic.go:334] "Generic (PLEG): container finished" podID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerID="d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250" exitCode=2
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.869819 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerDied","Data":"c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4"}
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.869871 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerDied","Data":"d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250"}
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.871982 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" event={"ID":"0ff9b0c5-ac10-49d8-8876-f605852f490d","Type":"ContainerStarted","Data":"91d8d9e707711fcae426e8b170cb80981d57c808662fb514ea1b081fb8151a98"}
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.872092 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-log" containerID="cri-o://b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40" gracePeriod=30
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.872143 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.872266 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-api" containerID="cri-o://1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695" gracePeriod=30
Feb 02 13:21:51 crc kubenswrapper[4955]: I0202 13:21:51.906267 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" podStartSLOduration=2.906249426 podStartE2EDuration="2.906249426s" podCreationTimestamp="2026-02-02 13:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:51.904486253 +0000 UTC m=+1162.816822703" watchObservedRunningTime="2026-02-02 13:21:51.906249426 +0000 UTC m=+1162.818585876"
Feb 02 13:21:52 crc kubenswrapper[4955]: I0202 13:21:52.546894 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:21:52 crc kubenswrapper[4955]: I0202 13:21:52.885379 4955 generic.go:334] "Generic (PLEG): container finished" podID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerID="b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40" exitCode=143
Feb 02 13:21:52 crc kubenswrapper[4955]: I0202 13:21:52.885447 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"518d5b92-e7c7-49b2-947f-caed5758afcd","Type":"ContainerDied","Data":"b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40"}
Feb 02 13:21:52 crc kubenswrapper[4955]: I0202 13:21:52.889354 4955 generic.go:334] "Generic (PLEG): container finished" podID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerID="76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a" exitCode=0
Feb 02 13:21:52 crc kubenswrapper[4955]: I0202 13:21:52.889432 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerDied","Data":"76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a"}
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.085904 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.204:3000/\": dial tcp 10.217.0.204:3000: connect: connection refused"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.512208 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.651014 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-config-data\") pod \"518d5b92-e7c7-49b2-947f-caed5758afcd\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.651110 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/518d5b92-e7c7-49b2-947f-caed5758afcd-logs\") pod \"518d5b92-e7c7-49b2-947f-caed5758afcd\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.651148 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnmv8\" (UniqueName: \"kubernetes.io/projected/518d5b92-e7c7-49b2-947f-caed5758afcd-kube-api-access-wnmv8\") pod \"518d5b92-e7c7-49b2-947f-caed5758afcd\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.651181 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-combined-ca-bundle\") pod \"518d5b92-e7c7-49b2-947f-caed5758afcd\" (UID: \"518d5b92-e7c7-49b2-947f-caed5758afcd\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.651839 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/518d5b92-e7c7-49b2-947f-caed5758afcd-logs" (OuterVolumeSpecName: "logs") pod "518d5b92-e7c7-49b2-947f-caed5758afcd" (UID: "518d5b92-e7c7-49b2-947f-caed5758afcd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.670017 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518d5b92-e7c7-49b2-947f-caed5758afcd-kube-api-access-wnmv8" (OuterVolumeSpecName: "kube-api-access-wnmv8") pod "518d5b92-e7c7-49b2-947f-caed5758afcd" (UID: "518d5b92-e7c7-49b2-947f-caed5758afcd"). InnerVolumeSpecName "kube-api-access-wnmv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.691999 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "518d5b92-e7c7-49b2-947f-caed5758afcd" (UID: "518d5b92-e7c7-49b2-947f-caed5758afcd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.726296 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-config-data" (OuterVolumeSpecName: "config-data") pod "518d5b92-e7c7-49b2-947f-caed5758afcd" (UID: "518d5b92-e7c7-49b2-947f-caed5758afcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.753649 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.753675 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/518d5b92-e7c7-49b2-947f-caed5758afcd-logs\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.753684 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnmv8\" (UniqueName: \"kubernetes.io/projected/518d5b92-e7c7-49b2-947f-caed5758afcd-kube-api-access-wnmv8\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.753694 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d5b92-e7c7-49b2-947f-caed5758afcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.862850 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.921036 4955 generic.go:334] "Generic (PLEG): container finished" podID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerID="1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695" exitCode=0
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.921223 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"518d5b92-e7c7-49b2-947f-caed5758afcd","Type":"ContainerDied","Data":"1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695"}
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.921246 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.921269 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"518d5b92-e7c7-49b2-947f-caed5758afcd","Type":"ContainerDied","Data":"ec70e88b22f87158340696b2a008d429e4676bf083f2e8f1e1c2072721a90dfb"}
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.921289 4955 scope.go:117] "RemoveContainer" containerID="1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.926706 4955 generic.go:334] "Generic (PLEG): container finished" podID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerID="6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a" exitCode=0
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.926745 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerDied","Data":"6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a"}
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.926770 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814695e5-ccd0-42ab-b6f8-a924bdfc330d","Type":"ContainerDied","Data":"bde029fadd45af3d23809a2d92f145c19c7d0f9a9717ec3ffdc6ca1386b53bd0"}
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.926831 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.944903 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.945458 4955 scope.go:117] "RemoveContainer" containerID="b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.953320 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957276 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-scripts\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957360 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmxw4\" (UniqueName: \"kubernetes.io/projected/814695e5-ccd0-42ab-b6f8-a924bdfc330d-kube-api-access-mmxw4\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957412 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-run-httpd\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957517 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-combined-ca-bundle\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957551 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-log-httpd\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957610 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-sg-core-conf-yaml\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957658 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-ceilometer-tls-certs\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.957725 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-config-data\") pod \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\" (UID: \"814695e5-ccd0-42ab-b6f8-a924bdfc330d\") "
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.963765 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.963958 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.968694 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-scripts" (OuterVolumeSpecName: "scripts") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.971135 4955 scope.go:117] "RemoveContainer" containerID="1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.971613 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695\": container with ID starting with 1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695 not found: ID does not exist" containerID="1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.971662 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695"} err="failed to get container status \"1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695\": rpc error: code = NotFound desc = could not find container \"1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695\": container with ID starting with 1846176302acca23d31f310e5bb38e0f8951fbc4980c4fd256d01bf295203695 not found: ID does not exist"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.971692 4955 scope.go:117] "RemoveContainer" containerID="b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.972007 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40\": container with ID starting with b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40 not found: ID does not exist" containerID="b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.972028 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40"} err="failed to get container status \"b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40\": rpc error: code = NotFound desc = could not find container \"b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40\": container with ID starting with b15a21d03bba38a957e12b581b97a415ffc262dfa9dca229f07979547793ba40 not found: ID does not exist"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.972043 4955 scope.go:117] "RemoveContainer" containerID="c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973026 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814695e5-ccd0-42ab-b6f8-a924bdfc330d-kube-api-access-mmxw4" (OuterVolumeSpecName: "kube-api-access-mmxw4") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "kube-api-access-mmxw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973333 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.973834 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="proxy-httpd"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973855 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="proxy-httpd"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.973870 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-api"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973877 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-api"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.973891 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-central-agent"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973897 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-central-agent"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.973919 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-log"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973925 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-log"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.973938 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-notification-agent"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973945 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-notification-agent"
Feb 02 13:21:55 crc kubenswrapper[4955]: E0202 13:21:55.973954 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="sg-core"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.973959 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="sg-core"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.974123 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-api"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.974147 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" containerName="nova-api-log"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.974159 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-central-agent"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.974169 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="ceilometer-notification-agent"
Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.974176 4955 memory_manager.go:354] "RemoveStaleState removing state"
podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="sg-core" Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.974183 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" containerName="proxy-httpd" Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.975157 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.977558 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.977794 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.977909 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 13:21:55 crc kubenswrapper[4955]: I0202 13:21:55.986898 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.012770 4955 scope.go:117] "RemoveContainer" containerID="d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.018150 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.034827 4955 scope.go:117] "RemoveContainer" containerID="6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.053254 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.054737 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.055633 4955 scope.go:117] "RemoveContainer" containerID="76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060713 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060749 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060764 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060777 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060790 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060801 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmxw4\" (UniqueName: \"kubernetes.io/projected/814695e5-ccd0-42ab-b6f8-a924bdfc330d-kube-api-access-mmxw4\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.060812 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814695e5-ccd0-42ab-b6f8-a924bdfc330d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.084759 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-config-data" (OuterVolumeSpecName: "config-data") pod "814695e5-ccd0-42ab-b6f8-a924bdfc330d" (UID: "814695e5-ccd0-42ab-b6f8-a924bdfc330d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.091712 4955 scope.go:117] "RemoveContainer" containerID="c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4" Feb 02 13:21:56 crc kubenswrapper[4955]: E0202 13:21:56.092313 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4\": container with ID starting with c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4 not found: ID does not exist" containerID="c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.092373 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4"} err="failed to get container status \"c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4\": rpc error: code = NotFound desc = could not find container \"c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4\": container with ID starting with c20f7afd8b6ac7bc05cf790fe33264f6f1d94e4659d68ec8380e2822d4f483c4 not found: ID does not exist" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.092407 4955 scope.go:117] "RemoveContainer" containerID="d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250" Feb 02 13:21:56 crc kubenswrapper[4955]: E0202 13:21:56.093795 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250\": container with ID starting with d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250 not found: ID does not exist" containerID="d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.093849 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250"} err="failed to get container status \"d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250\": rpc error: code = NotFound desc = could not find container \"d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250\": container with ID starting with d6f2178a84eb0b4a6388118304fa8778c69d6d7ac256aac4d3a627e7b175b250 not found: ID does not exist" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.093881 4955 scope.go:117] "RemoveContainer" containerID="6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a" Feb 02 13:21:56 crc kubenswrapper[4955]: E0202 13:21:56.094348 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a\": container with ID starting with 6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a not found: ID does not exist" containerID="6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.094377 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a"} err="failed to get container status \"6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a\": rpc error: code = NotFound desc = could not 
find container \"6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a\": container with ID starting with 6b7f86e28796931dec3381b4efa3b2132a49ab517ae3db06c9b6334c2292b32a not found: ID does not exist" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.094396 4955 scope.go:117] "RemoveContainer" containerID="76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a" Feb 02 13:21:56 crc kubenswrapper[4955]: E0202 13:21:56.094741 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a\": container with ID starting with 76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a not found: ID does not exist" containerID="76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.094773 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a"} err="failed to get container status \"76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a\": rpc error: code = NotFound desc = could not find container \"76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a\": container with ID starting with 76b5b85b395d8b421cc40e79f6e406dbc7fc4721a363c7c9f8d2f1444117943a not found: ID does not exist" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.162688 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-public-tls-certs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.162778 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7cg\" (UniqueName: \"kubernetes.io/projected/1347061d-7385-40c9-9fa4-902dae558fac-kube-api-access-tf7cg\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.162896 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-config-data\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.162983 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.163034 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1347061d-7385-40c9-9fa4-902dae558fac-logs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.163058 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.163155 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814695e5-ccd0-42ab-b6f8-a924bdfc330d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.260079 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.264699 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-config-data\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.264783 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.264823 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1347061d-7385-40c9-9fa4-902dae558fac-logs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.264843 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.264871 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-public-tls-certs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.264925 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf7cg\" (UniqueName: \"kubernetes.io/projected/1347061d-7385-40c9-9fa4-902dae558fac-kube-api-access-tf7cg\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.265434 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1347061d-7385-40c9-9fa4-902dae558fac-logs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.268335 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.269402 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-public-tls-certs\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.271109 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.272356 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-config-data\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.295160 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.301149 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf7cg\" (UniqueName: \"kubernetes.io/projected/1347061d-7385-40c9-9fa4-902dae558fac-kube-api-access-tf7cg\") pod \"nova-api-0\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.307210 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.313648 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.316778 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.321970 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.325487 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.326159 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.326584 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471347 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fbbb588-ff7e-4242-a370-8943bf57604e-log-httpd\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471749 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471786 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fbbb588-ff7e-4242-a370-8943bf57604e-run-httpd\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471815 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m678g\" (UniqueName: \"kubernetes.io/projected/4fbbb588-ff7e-4242-a370-8943bf57604e-kube-api-access-m678g\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-config-data\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471864 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-scripts\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471887 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.471905 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573182 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fbbb588-ff7e-4242-a370-8943bf57604e-run-httpd\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573208 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m678g\" (UniqueName: \"kubernetes.io/projected/4fbbb588-ff7e-4242-a370-8943bf57604e-kube-api-access-m678g\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573224 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-config-data\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573249 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-scripts\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573280 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573295 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573359 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fbbb588-ff7e-4242-a370-8943bf57604e-log-httpd\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.573937 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fbbb588-ff7e-4242-a370-8943bf57604e-log-httpd\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.574261 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4fbbb588-ff7e-4242-a370-8943bf57604e-run-httpd\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.591217 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-config-data\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.594015 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-scripts\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.594365 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.594972 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m678g\" (UniqueName: \"kubernetes.io/projected/4fbbb588-ff7e-4242-a370-8943bf57604e-kube-api-access-m678g\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.595493 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.596196 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbbb588-ff7e-4242-a370-8943bf57604e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fbbb588-ff7e-4242-a370-8943bf57604e\") " pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.757340 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.833433 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:21:56 crc kubenswrapper[4955]: I0202 13:21:56.941396 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1347061d-7385-40c9-9fa4-902dae558fac","Type":"ContainerStarted","Data":"967b1862226b60788d13e9b3f42761922241bcc8a1be600ac16184d7e100c776"} Feb 02 13:21:57 crc kubenswrapper[4955]: W0202 13:21:57.196797 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbbb588_ff7e_4242_a370_8943bf57604e.slice/crio-e889b1fc1f45227418788d4e97e40747b9957a715a2a9bfc5458158d96bf47c8 WatchSource:0}: Error finding container e889b1fc1f45227418788d4e97e40747b9957a715a2a9bfc5458158d96bf47c8: Status 404 returned error can't find the container with id e889b1fc1f45227418788d4e97e40747b9957a715a2a9bfc5458158d96bf47c8 Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.198642 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.199007 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.546664 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.565587 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.743223 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518d5b92-e7c7-49b2-947f-caed5758afcd" path="/var/lib/kubelet/pods/518d5b92-e7c7-49b2-947f-caed5758afcd/volumes" Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.743918 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814695e5-ccd0-42ab-b6f8-a924bdfc330d" path="/var/lib/kubelet/pods/814695e5-ccd0-42ab-b6f8-a924bdfc330d/volumes" Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.956780 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1347061d-7385-40c9-9fa4-902dae558fac","Type":"ContainerStarted","Data":"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63"} Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.957067 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1347061d-7385-40c9-9fa4-902dae558fac","Type":"ContainerStarted","Data":"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930"} Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.959412 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fbbb588-ff7e-4242-a370-8943bf57604e","Type":"ContainerStarted","Data":"0181902b3485b3c5491a606bd14176bc8a9d5129a8748ad0802c58ff53b3c9b4"} Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.959437 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fbbb588-ff7e-4242-a370-8943bf57604e","Type":"ContainerStarted","Data":"e889b1fc1f45227418788d4e97e40747b9957a715a2a9bfc5458158d96bf47c8"} Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.975638 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:21:57 crc kubenswrapper[4955]: I0202 13:21:57.981488 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.981462911 podStartE2EDuration="2.981462911s" podCreationTimestamp="2026-02-02 13:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:57.972744738 +0000 UTC m=+1168.885081218" watchObservedRunningTime="2026-02-02 13:21:57.981462911 +0000 UTC m=+1168.893799361" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.143618 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5k49j"] Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.144869 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.146946 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.147337 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.154849 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5k49j"] Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.213759 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-scripts\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.213914 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.214011 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-config-data\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.214027 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd27m\" (UniqueName: \"kubernetes.io/projected/2d6fd940-a5a3-444e-8401-51a972389d19-kube-api-access-bd27m\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.316006 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.316167 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-config-data\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.316193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd27m\" (UniqueName: \"kubernetes.io/projected/2d6fd940-a5a3-444e-8401-51a972389d19-kube-api-access-bd27m\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.316248 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-scripts\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.321642 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-scripts\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.326290 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-config-data\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.327481 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.334482 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd27m\" (UniqueName: \"kubernetes.io/projected/2d6fd940-a5a3-444e-8401-51a972389d19-kube-api-access-bd27m\") pod \"nova-cell1-cell-mapping-5k49j\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.462123 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.919868 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5k49j"] Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.967649 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5k49j" event={"ID":"2d6fd940-a5a3-444e-8401-51a972389d19","Type":"ContainerStarted","Data":"620b3e87875f2e510148ddc14fd0b75d9428dde5af17d5c2e0dabcb096554a4a"} Feb 02 13:21:58 crc kubenswrapper[4955]: I0202 13:21:58.970230 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fbbb588-ff7e-4242-a370-8943bf57604e","Type":"ContainerStarted","Data":"1d4c58c42f980f4e92be9d1f4fc352da4bab94a6acb06faf4cb5ac16048f1306"} Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.550808 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.609456 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-fbzdk"] Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.609714 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="dnsmasq-dns" containerID="cri-o://7f68da286d6d5de227f05cbf1a51162390846d3ed4f28864581f4a6bd2b3dea1" gracePeriod=10 Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.898715 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.981501 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5k49j" event={"ID":"2d6fd940-a5a3-444e-8401-51a972389d19","Type":"ContainerStarted","Data":"a2aa9c2192728b816dad541badfc5174cb614eddd0e4bf91c9c548ea7db7a2b3"} Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.985244 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fbbb588-ff7e-4242-a370-8943bf57604e","Type":"ContainerStarted","Data":"4e3aa11b4aef9eb32dbe5acbd6ce57989666831a7c091e10b4b244634e19563b"} Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.989909 4955 generic.go:334] "Generic (PLEG): container finished" podID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerID="7f68da286d6d5de227f05cbf1a51162390846d3ed4f28864581f4a6bd2b3dea1" exitCode=0 Feb 02 13:21:59 crc kubenswrapper[4955]: I0202 13:21:59.989981 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" event={"ID":"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21","Type":"ContainerDied","Data":"7f68da286d6d5de227f05cbf1a51162390846d3ed4f28864581f4a6bd2b3dea1"} Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.613753 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.643014 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5k49j" podStartSLOduration=2.642988609 podStartE2EDuration="2.642988609s" podCreationTimestamp="2026-02-02 13:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:00.014258547 +0000 UTC m=+1170.926595007" watchObservedRunningTime="2026-02-02 13:22:00.642988609 +0000 UTC m=+1171.555325059" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.780631 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6g5z\" (UniqueName: \"kubernetes.io/projected/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-kube-api-access-f6g5z\") pod \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.780844 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-svc\") pod \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.780923 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-sb\") pod \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.781034 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-swift-storage-0\") pod \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.781179 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-config\") pod \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.781253 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-nb\") pod \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\" (UID: \"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21\") " Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.798232 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-kube-api-access-f6g5z" (OuterVolumeSpecName: "kube-api-access-f6g5z") pod "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" (UID: "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21"). InnerVolumeSpecName "kube-api-access-f6g5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.864292 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" (UID: "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.865584 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" (UID: "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.872712 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" (UID: "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.875623 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-config" (OuterVolumeSpecName: "config") pod "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" (UID: "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.883338 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" (UID: "97dd00fc-d7ac-4e8a-a7e0-920524b0fd21"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.885764 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.885800 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.885834 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.885845 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.885856 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6g5z\" (UniqueName: \"kubernetes.io/projected/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-kube-api-access-f6g5z\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.885867 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.999139 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.999134 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-fbzdk" event={"ID":"97dd00fc-d7ac-4e8a-a7e0-920524b0fd21","Type":"ContainerDied","Data":"02587679aa51fc370dd81e3937fae60c2fd75cb40287b60f36de373e19ed2c62"} Feb 02 13:22:00 crc kubenswrapper[4955]: I0202 13:22:00.999228 4955 scope.go:117] "RemoveContainer" containerID="7f68da286d6d5de227f05cbf1a51162390846d3ed4f28864581f4a6bd2b3dea1" Feb 02 13:22:01 crc kubenswrapper[4955]: I0202 13:22:01.028526 4955 scope.go:117] "RemoveContainer" containerID="0b58174174e0d71362114d23dd92fc13290e2f76262ab5c988ee158cf69b6c63" Feb 02 13:22:01 crc kubenswrapper[4955]: I0202 13:22:01.031810 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-fbzdk"] Feb 02 13:22:01 crc kubenswrapper[4955]: I0202 13:22:01.038215 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-fbzdk"] Feb 02 13:22:01 crc kubenswrapper[4955]: I0202 13:22:01.734348 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" path="/var/lib/kubelet/pods/97dd00fc-d7ac-4e8a-a7e0-920524b0fd21/volumes" Feb 02 13:22:02 crc kubenswrapper[4955]: I0202 13:22:02.008985 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fbbb588-ff7e-4242-a370-8943bf57604e","Type":"ContainerStarted","Data":"448f78f625d17aff597bc0bd65b6d8ee40e3729df11cca52fe03d66645da6957"} Feb 02 13:22:02 crc kubenswrapper[4955]: I0202 13:22:02.010394 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" 
Feb 02 13:22:02 crc kubenswrapper[4955]: I0202 13:22:02.044067 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.916854132 podStartE2EDuration="6.044038278s" podCreationTimestamp="2026-02-02 13:21:56 +0000 UTC" firstStartedPulling="2026-02-02 13:21:57.198795711 +0000 UTC m=+1168.111132161" lastFinishedPulling="2026-02-02 13:22:01.325979857 +0000 UTC m=+1172.238316307" observedRunningTime="2026-02-02 13:22:02.032070368 +0000 UTC m=+1172.944406828" watchObservedRunningTime="2026-02-02 13:22:02.044038278 +0000 UTC m=+1172.956374738" Feb 02 13:22:04 crc kubenswrapper[4955]: I0202 13:22:04.028876 4955 generic.go:334] "Generic (PLEG): container finished" podID="2d6fd940-a5a3-444e-8401-51a972389d19" containerID="a2aa9c2192728b816dad541badfc5174cb614eddd0e4bf91c9c548ea7db7a2b3" exitCode=0 Feb 02 13:22:04 crc kubenswrapper[4955]: I0202 13:22:04.029066 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5k49j" event={"ID":"2d6fd940-a5a3-444e-8401-51a972389d19","Type":"ContainerDied","Data":"a2aa9c2192728b816dad541badfc5174cb614eddd0e4bf91c9c548ea7db7a2b3"} Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.394227 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.577490 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-combined-ca-bundle\") pod \"2d6fd940-a5a3-444e-8401-51a972389d19\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.577586 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-scripts\") pod \"2d6fd940-a5a3-444e-8401-51a972389d19\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.577680 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd27m\" (UniqueName: \"kubernetes.io/projected/2d6fd940-a5a3-444e-8401-51a972389d19-kube-api-access-bd27m\") pod \"2d6fd940-a5a3-444e-8401-51a972389d19\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.577730 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-config-data\") pod \"2d6fd940-a5a3-444e-8401-51a972389d19\" (UID: \"2d6fd940-a5a3-444e-8401-51a972389d19\") " Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.585177 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-scripts" (OuterVolumeSpecName: "scripts") pod "2d6fd940-a5a3-444e-8401-51a972389d19" (UID: "2d6fd940-a5a3-444e-8401-51a972389d19"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.585217 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6fd940-a5a3-444e-8401-51a972389d19-kube-api-access-bd27m" (OuterVolumeSpecName: "kube-api-access-bd27m") pod "2d6fd940-a5a3-444e-8401-51a972389d19" (UID: "2d6fd940-a5a3-444e-8401-51a972389d19"). InnerVolumeSpecName "kube-api-access-bd27m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.607432 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-config-data" (OuterVolumeSpecName: "config-data") pod "2d6fd940-a5a3-444e-8401-51a972389d19" (UID: "2d6fd940-a5a3-444e-8401-51a972389d19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.610510 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d6fd940-a5a3-444e-8401-51a972389d19" (UID: "2d6fd940-a5a3-444e-8401-51a972389d19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.680088 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.680138 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.680150 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd27m\" (UniqueName: \"kubernetes.io/projected/2d6fd940-a5a3-444e-8401-51a972389d19-kube-api-access-bd27m\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:05 crc kubenswrapper[4955]: I0202 13:22:05.680162 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6fd940-a5a3-444e-8401-51a972389d19-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.059485 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5k49j" event={"ID":"2d6fd940-a5a3-444e-8401-51a972389d19","Type":"ContainerDied","Data":"620b3e87875f2e510148ddc14fd0b75d9428dde5af17d5c2e0dabcb096554a4a"} Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.059784 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620b3e87875f2e510148ddc14fd0b75d9428dde5af17d5c2e0dabcb096554a4a" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.059536 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5k49j" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.242523 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.243024 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-log" containerID="cri-o://5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930" gracePeriod=30 Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.243073 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-api" containerID="cri-o://086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63" gracePeriod=30 Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.259764 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.260039 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="644dbcee-4378-48ce-9a1c-e2c7369db99a" containerName="nova-scheduler-scheduler" containerID="cri-o://ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573" gracePeriod=30 Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.325874 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.328355 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-log" containerID="cri-o://3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53" gracePeriod=30 Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.328499 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-metadata" containerID="cri-o://1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9" gracePeriod=30 Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.784321 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900284 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1347061d-7385-40c9-9fa4-902dae558fac-logs\") pod \"1347061d-7385-40c9-9fa4-902dae558fac\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900377 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf7cg\" (UniqueName: \"kubernetes.io/projected/1347061d-7385-40c9-9fa4-902dae558fac-kube-api-access-tf7cg\") pod \"1347061d-7385-40c9-9fa4-902dae558fac\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900396 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-config-data\") pod \"1347061d-7385-40c9-9fa4-902dae558fac\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900460 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-combined-ca-bundle\") pod \"1347061d-7385-40c9-9fa4-902dae558fac\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900538 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-internal-tls-certs\") pod \"1347061d-7385-40c9-9fa4-902dae558fac\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900621 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-public-tls-certs\") pod \"1347061d-7385-40c9-9fa4-902dae558fac\" (UID: \"1347061d-7385-40c9-9fa4-902dae558fac\") " Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.900694 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1347061d-7385-40c9-9fa4-902dae558fac-logs" (OuterVolumeSpecName: "logs") pod "1347061d-7385-40c9-9fa4-902dae558fac" (UID: "1347061d-7385-40c9-9fa4-902dae558fac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.901221 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1347061d-7385-40c9-9fa4-902dae558fac-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.905270 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1347061d-7385-40c9-9fa4-902dae558fac-kube-api-access-tf7cg" (OuterVolumeSpecName: "kube-api-access-tf7cg") pod "1347061d-7385-40c9-9fa4-902dae558fac" (UID: "1347061d-7385-40c9-9fa4-902dae558fac"). InnerVolumeSpecName "kube-api-access-tf7cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.927601 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-config-data" (OuterVolumeSpecName: "config-data") pod "1347061d-7385-40c9-9fa4-902dae558fac" (UID: "1347061d-7385-40c9-9fa4-902dae558fac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.944790 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1347061d-7385-40c9-9fa4-902dae558fac" (UID: "1347061d-7385-40c9-9fa4-902dae558fac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.962032 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1347061d-7385-40c9-9fa4-902dae558fac" (UID: "1347061d-7385-40c9-9fa4-902dae558fac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:06 crc kubenswrapper[4955]: I0202 13:22:06.979913 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1347061d-7385-40c9-9fa4-902dae558fac" (UID: "1347061d-7385-40c9-9fa4-902dae558fac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.003884 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf7cg\" (UniqueName: \"kubernetes.io/projected/1347061d-7385-40c9-9fa4-902dae558fac-kube-api-access-tf7cg\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.003914 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.003923 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.003932 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.003941 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1347061d-7385-40c9-9fa4-902dae558fac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.069704 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerID="3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53" exitCode=143 Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.069778 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"b5c3a4ff-d989-4604-9515-619124f0b5f5","Type":"ContainerDied","Data":"3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53"} Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072127 4955 generic.go:334] "Generic (PLEG): container finished" podID="1347061d-7385-40c9-9fa4-902dae558fac" containerID="086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63" exitCode=0 Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072168 4955 generic.go:334] "Generic (PLEG): container finished" podID="1347061d-7385-40c9-9fa4-902dae558fac" containerID="5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930" exitCode=143 Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072189 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072193 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1347061d-7385-40c9-9fa4-902dae558fac","Type":"ContainerDied","Data":"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63"} Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072346 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1347061d-7385-40c9-9fa4-902dae558fac","Type":"ContainerDied","Data":"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930"} Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072361 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1347061d-7385-40c9-9fa4-902dae558fac","Type":"ContainerDied","Data":"967b1862226b60788d13e9b3f42761922241bcc8a1be600ac16184d7e100c776"} Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.072377 4955 scope.go:117] "RemoveContainer" containerID="086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.092739 4955 scope.go:117] "RemoveContainer" containerID="5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.114207 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.120509 4955 scope.go:117] "RemoveContainer" containerID="086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63" Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.125825 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63\": container with ID starting with 086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63 not found: ID does not exist" containerID="086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.125877 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63"} err="failed to get container status \"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63\": rpc error: code = NotFound desc = could not find container \"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63\": container with ID starting with 086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63 not found: ID does not exist" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.125909 
4955 scope.go:117] "RemoveContainer" containerID="5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930" Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.126336 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930\": container with ID starting with 5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930 not found: ID does not exist" containerID="5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.126376 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930"} err="failed to get container status \"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930\": rpc error: code = NotFound desc = could not find container \"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930\": container with ID starting with 5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930 not found: ID does not exist" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.126401 4955 scope.go:117] "RemoveContainer" containerID="086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.126618 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63"} err="failed to get container status \"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63\": rpc error: code = NotFound desc = could not find container \"086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63\": container with ID starting with 086a8f7271380b32da0ea5110b991344bd1237f2fed6be05f754a052a7585c63 not found: ID does not exist" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.126639 4955 scope.go:117] "RemoveContainer" containerID="5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.126791 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930"} err="failed to get container status \"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930\": rpc error: code = NotFound desc = could not find container \"5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930\": container with ID starting with 5760aca42908afd772dda28698a19d66081acd2a42cf2fa2ee9a15144f996930 not found: ID does not exist" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.131660 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.143794 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.144276 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-api" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144295 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-api" Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.144317 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
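The E-level "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above look alarming but are retries against container IDs CRI-O has already removed; code = NotFound is the state the cleanup wanted to reach. A small sketch of the usual idempotent-deletion check for gRPC errors of this shape (illustrative, not the kubelet's own code; the helper name alreadyGone is invented):

    package main

    import (
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI call failed only because the container
    // is already deleted; in a cleanup path that is the desired end state, so
    // the caller can treat it as success instead of surfacing an error.
    func alreadyGone(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	// A fabricated NotFound error of the kind CRI-O returns above.
    	err := status.Error(codes.NotFound, "could not find container")
    	if err == nil || alreadyGone(err) {
    		// proceed: the container no longer exists, which is what we wanted
    	}
    }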
podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="dnsmasq-dns" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144323 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="dnsmasq-dns" Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.144339 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6fd940-a5a3-444e-8401-51a972389d19" containerName="nova-manage" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144345 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6fd940-a5a3-444e-8401-51a972389d19" containerName="nova-manage" Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.144364 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="init" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144370 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="init" Feb 02 13:22:07 crc kubenswrapper[4955]: E0202 13:22:07.144383 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-log" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144481 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-log" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144707 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6fd940-a5a3-444e-8401-51a972389d19" containerName="nova-manage" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144724 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-log" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144738 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1347061d-7385-40c9-9fa4-902dae558fac" containerName="nova-api-api" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.144755 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="97dd00fc-d7ac-4e8a-a7e0-920524b0fd21" containerName="dnsmasq-dns" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.146004 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.149930 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.150729 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.150864 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.176508 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.308645 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.308832 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.308905 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-config-data\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.308942 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-logs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.308961 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.309037 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzkd\" (UniqueName: \"kubernetes.io/projected/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-kube-api-access-4lzkd\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.411701 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.411792 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-config-data\") pod \"nova-api-0\" (UID: 
\"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.411840 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-logs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.411863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.411885 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lzkd\" (UniqueName: \"kubernetes.io/projected/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-kube-api-access-4lzkd\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.411911 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.412809 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-logs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.416852 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.416852 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.417386 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-config-data\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.417882 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.434054 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lzkd\" (UniqueName: \"kubernetes.io/projected/f9056a98-ddc3-4c1b-8c5d-25a03e6163ce-kube-api-access-4lzkd\") pod \"nova-api-0\" (UID: \"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce\") " pod="openstack/nova-api-0" Feb 
02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.463195 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:22:07 crc kubenswrapper[4955]: I0202 13:22:07.733875 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1347061d-7385-40c9-9fa4-902dae558fac" path="/var/lib/kubelet/pods/1347061d-7385-40c9-9fa4-902dae558fac/volumes" Feb 02 13:22:08 crc kubenswrapper[4955]: I0202 13:22:08.090398 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.093545 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce","Type":"ContainerStarted","Data":"4aa469455126afea60fbeafd491bb50d72330c200f5bd44202d2cdb2f28c378d"} Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.094131 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce","Type":"ContainerStarted","Data":"a799cdd0a9100c1779bab9fa12c355aa4e493342379a998d61ed71498d8955da"} Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.094150 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9056a98-ddc3-4c1b-8c5d-25a03e6163ce","Type":"ContainerStarted","Data":"973da76662cfb4de24ec62344beac7641bb4ee6c13b312e49bf99abf9616cc20"} Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.120353 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.120326734 podStartE2EDuration="2.120326734s" podCreationTimestamp="2026-02-02 13:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:09.110618328 +0000 UTC m=+1180.022954788" watchObservedRunningTime="2026-02-02 13:22:09.120326734 +0000 UTC m=+1180.032663184" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.666038 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.761756 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwgq\" (UniqueName: \"kubernetes.io/projected/644dbcee-4378-48ce-9a1c-e2c7369db99a-kube-api-access-kmwgq\") pod \"644dbcee-4378-48ce-9a1c-e2c7369db99a\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.761917 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-combined-ca-bundle\") pod \"644dbcee-4378-48ce-9a1c-e2c7369db99a\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.769887 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644dbcee-4378-48ce-9a1c-e2c7369db99a-kube-api-access-kmwgq" (OuterVolumeSpecName: "kube-api-access-kmwgq") pod "644dbcee-4378-48ce-9a1c-e2c7369db99a" (UID: "644dbcee-4378-48ce-9a1c-e2c7369db99a"). InnerVolumeSpecName "kube-api-access-kmwgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.802287 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644dbcee-4378-48ce-9a1c-e2c7369db99a" (UID: "644dbcee-4378-48ce-9a1c-e2c7369db99a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.863540 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-config-data\") pod \"644dbcee-4378-48ce-9a1c-e2c7369db99a\" (UID: \"644dbcee-4378-48ce-9a1c-e2c7369db99a\") " Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.865729 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwgq\" (UniqueName: \"kubernetes.io/projected/644dbcee-4378-48ce-9a1c-e2c7369db99a-kube-api-access-kmwgq\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.865757 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.874986 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.896371 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-config-data" (OuterVolumeSpecName: "config-data") pod "644dbcee-4378-48ce-9a1c-e2c7369db99a" (UID: "644dbcee-4378-48ce-9a1c-e2c7369db99a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:09 crc kubenswrapper[4955]: I0202 13:22:09.969851 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644dbcee-4378-48ce-9a1c-e2c7369db99a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.071043 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-config-data\") pod \"b5c3a4ff-d989-4604-9515-619124f0b5f5\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.071120 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5c3a4ff-d989-4604-9515-619124f0b5f5-logs\") pod \"b5c3a4ff-d989-4604-9515-619124f0b5f5\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.071233 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-272qh\" (UniqueName: \"kubernetes.io/projected/b5c3a4ff-d989-4604-9515-619124f0b5f5-kube-api-access-272qh\") pod \"b5c3a4ff-d989-4604-9515-619124f0b5f5\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.071258 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-combined-ca-bundle\") pod \"b5c3a4ff-d989-4604-9515-619124f0b5f5\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.071278 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-nova-metadata-tls-certs\") pod \"b5c3a4ff-d989-4604-9515-619124f0b5f5\" (UID: \"b5c3a4ff-d989-4604-9515-619124f0b5f5\") " Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.071843 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c3a4ff-d989-4604-9515-619124f0b5f5-logs" (OuterVolumeSpecName: "logs") pod "b5c3a4ff-d989-4604-9515-619124f0b5f5" (UID: "b5c3a4ff-d989-4604-9515-619124f0b5f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.074999 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c3a4ff-d989-4604-9515-619124f0b5f5-kube-api-access-272qh" (OuterVolumeSpecName: "kube-api-access-272qh") pod "b5c3a4ff-d989-4604-9515-619124f0b5f5" (UID: "b5c3a4ff-d989-4604-9515-619124f0b5f5"). InnerVolumeSpecName "kube-api-access-272qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.094389 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-config-data" (OuterVolumeSpecName: "config-data") pod "b5c3a4ff-d989-4604-9515-619124f0b5f5" (UID: "b5c3a4ff-d989-4604-9515-619124f0b5f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.095742 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5c3a4ff-d989-4604-9515-619124f0b5f5" (UID: "b5c3a4ff-d989-4604-9515-619124f0b5f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.104520 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerID="1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9" exitCode=0 Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.104611 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b5c3a4ff-d989-4604-9515-619124f0b5f5","Type":"ContainerDied","Data":"1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9"} Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.104649 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b5c3a4ff-d989-4604-9515-619124f0b5f5","Type":"ContainerDied","Data":"ef601f5228e9487474ad6d87df3ac8fdda1c7c3e2a5bbd12066161c1af5818a7"} Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.104661 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.104669 4955 scope.go:117] "RemoveContainer" containerID="1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.106515 4955 generic.go:334] "Generic (PLEG): container finished" podID="644dbcee-4378-48ce-9a1c-e2c7369db99a" containerID="ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573" exitCode=0 Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.106579 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.106589 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"644dbcee-4378-48ce-9a1c-e2c7369db99a","Type":"ContainerDied","Data":"ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573"} Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.106615 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"644dbcee-4378-48ce-9a1c-e2c7369db99a","Type":"ContainerDied","Data":"c7eebe91c0879d1d844e77b72deb030316086bafdac9ce392432631414cc2b72"} Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.145946 4955 scope.go:117] "RemoveContainer" containerID="3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.160487 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.165258 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b5c3a4ff-d989-4604-9515-619124f0b5f5" (UID: "b5c3a4ff-d989-4604-9515-619124f0b5f5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.173695 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-272qh\" (UniqueName: \"kubernetes.io/projected/b5c3a4ff-d989-4604-9515-619124f0b5f5-kube-api-access-272qh\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.173741 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.173753 4955 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.173766 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c3a4ff-d989-4604-9515-619124f0b5f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.173778 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5c3a4ff-d989-4604-9515-619124f0b5f5-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.175207 4955 scope.go:117] "RemoveContainer" containerID="1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.177165 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: E0202 13:22:10.177927 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9\": container with ID starting with 1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9 not found: ID does not exist" containerID="1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.177978 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9"} err="failed to get container status \"1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9\": rpc error: code = NotFound desc = could not find container \"1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9\": container with ID starting with 1497887e98a04b2392b55e2ff340e703fcb901bfec9e1c7fb297a09ae435f7c9 not found: ID does not exist" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.178010 4955 scope.go:117] "RemoveContainer" containerID="3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53" Feb 02 13:22:10 crc kubenswrapper[4955]: E0202 13:22:10.178512 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53\": container with ID starting with 3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53 not found: ID does not exist" containerID="3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.178541 4955 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53"} err="failed to get container status \"3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53\": rpc error: code = NotFound desc = could not find container \"3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53\": container with ID starting with 3abdfde4582ba0ca7ebd2d87bb260e0f0d96fc28b042e84206ee7f736681fc53 not found: ID does not exist" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.178575 4955 scope.go:117] "RemoveContainer" containerID="ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.199493 4955 scope.go:117] "RemoveContainer" containerID="ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573" Feb 02 13:22:10 crc kubenswrapper[4955]: E0202 13:22:10.200044 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573\": container with ID starting with ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573 not found: ID does not exist" containerID="ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.200081 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573"} err="failed to get container status \"ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573\": rpc error: code = NotFound desc = could not find container \"ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573\": container with ID starting with ede8c4003018ba0c393daa36005390e8ea924cf5e1783dfed132565299139573 not found: ID does not exist" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211098 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: E0202 13:22:10.211633 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-metadata" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211651 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-metadata" Feb 02 13:22:10 crc kubenswrapper[4955]: E0202 13:22:10.211672 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-log" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211678 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-log" Feb 02 13:22:10 crc kubenswrapper[4955]: E0202 13:22:10.211703 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644dbcee-4378-48ce-9a1c-e2c7369db99a" containerName="nova-scheduler-scheduler" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211709 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="644dbcee-4378-48ce-9a1c-e2c7369db99a" containerName="nova-scheduler-scheduler" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211870 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-metadata" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211887 4955 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" containerName="nova-metadata-log" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.211902 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="644dbcee-4378-48ce-9a1c-e2c7369db99a" containerName="nova-scheduler-scheduler" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.212522 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.217417 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.222006 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.377516 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfc78d-bc88-4849-843b-106dbb020bb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.377748 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhm5\" (UniqueName: \"kubernetes.io/projected/ffcfc78d-bc88-4849-843b-106dbb020bb4-kube-api-access-fqhm5\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.377839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfc78d-bc88-4849-843b-106dbb020bb4-config-data\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.442819 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.453960 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.478278 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.480885 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.481947 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhm5\" (UniqueName: \"kubernetes.io/projected/ffcfc78d-bc88-4849-843b-106dbb020bb4-kube-api-access-fqhm5\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.482328 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfc78d-bc88-4849-843b-106dbb020bb4-config-data\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.482497 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfc78d-bc88-4849-843b-106dbb020bb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.483824 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.493355 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.497485 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfc78d-bc88-4849-843b-106dbb020bb4-config-data\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.497862 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.501591 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhm5\" (UniqueName: \"kubernetes.io/projected/ffcfc78d-bc88-4849-843b-106dbb020bb4-kube-api-access-fqhm5\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.501826 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfc78d-bc88-4849-843b-106dbb020bb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ffcfc78d-bc88-4849-843b-106dbb020bb4\") " pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.535890 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.584637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pbg\" (UniqueName: \"kubernetes.io/projected/21d4cabc-0090-40ac-8afb-15ef9def8f7d-kube-api-access-52pbg\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.584791 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.585546 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-config-data\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.585678 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21d4cabc-0090-40ac-8afb-15ef9def8f7d-logs\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.586018 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.687758 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-config-data\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.688069 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21d4cabc-0090-40ac-8afb-15ef9def8f7d-logs\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.688122 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.688157 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52pbg\" (UniqueName: \"kubernetes.io/projected/21d4cabc-0090-40ac-8afb-15ef9def8f7d-kube-api-access-52pbg\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.688201 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.688687 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21d4cabc-0090-40ac-8afb-15ef9def8f7d-logs\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.694938 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-config-data\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.697280 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.709995 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d4cabc-0090-40ac-8afb-15ef9def8f7d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.712890 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52pbg\" (UniqueName: \"kubernetes.io/projected/21d4cabc-0090-40ac-8afb-15ef9def8f7d-kube-api-access-52pbg\") pod \"nova-metadata-0\" (UID: \"21d4cabc-0090-40ac-8afb-15ef9def8f7d\") " pod="openstack/nova-metadata-0" Feb 02 13:22:10 crc kubenswrapper[4955]: I0202 13:22:10.985190 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:22:11 crc kubenswrapper[4955]: W0202 13:22:11.042095 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcfc78d_bc88_4849_843b_106dbb020bb4.slice/crio-d2edbdb1ded887b1df37bfc1c40af5ff704daabb453c95d45e13fe151abb30ec WatchSource:0}: Error finding container d2edbdb1ded887b1df37bfc1c40af5ff704daabb453c95d45e13fe151abb30ec: Status 404 returned error can't find the container with id d2edbdb1ded887b1df37bfc1c40af5ff704daabb453c95d45e13fe151abb30ec Feb 02 13:22:11 crc kubenswrapper[4955]: I0202 13:22:11.045309 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:22:11 crc kubenswrapper[4955]: I0202 13:22:11.126316 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ffcfc78d-bc88-4849-843b-106dbb020bb4","Type":"ContainerStarted","Data":"d2edbdb1ded887b1df37bfc1c40af5ff704daabb453c95d45e13fe151abb30ec"} Feb 02 13:22:11 crc kubenswrapper[4955]: I0202 13:22:11.417742 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:22:11 crc kubenswrapper[4955]: W0202 13:22:11.421267 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21d4cabc_0090_40ac_8afb_15ef9def8f7d.slice/crio-abe18ab1164dbdee98725117dd57818d1fc61d00af1dd18b4ef720acfbb4b5f6 WatchSource:0}: Error finding container abe18ab1164dbdee98725117dd57818d1fc61d00af1dd18b4ef720acfbb4b5f6: Status 404 returned error can't find the container with id abe18ab1164dbdee98725117dd57818d1fc61d00af1dd18b4ef720acfbb4b5f6 Feb 02 13:22:11 crc kubenswrapper[4955]: I0202 13:22:11.737040 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644dbcee-4378-48ce-9a1c-e2c7369db99a" path="/var/lib/kubelet/pods/644dbcee-4378-48ce-9a1c-e2c7369db99a/volumes" Feb 02 13:22:11 crc kubenswrapper[4955]: I0202 13:22:11.738175 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c3a4ff-d989-4604-9515-619124f0b5f5" path="/var/lib/kubelet/pods/b5c3a4ff-d989-4604-9515-619124f0b5f5/volumes" Feb 02 13:22:12 crc kubenswrapper[4955]: I0202 13:22:12.148170 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"21d4cabc-0090-40ac-8afb-15ef9def8f7d","Type":"ContainerStarted","Data":"9014ce4095d5c40d1b89eb84d170a8acb3fd43eaf9de4f6b6d54d38610eda640"} Feb 02 13:22:12 crc kubenswrapper[4955]: I0202 13:22:12.148219 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"21d4cabc-0090-40ac-8afb-15ef9def8f7d","Type":"ContainerStarted","Data":"3c3e000fc6ccb5a2aa417b363082a7f276e420addda643fd7cf3a4453551eacd"} Feb 02 13:22:12 crc kubenswrapper[4955]: I0202 13:22:12.148233 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"21d4cabc-0090-40ac-8afb-15ef9def8f7d","Type":"ContainerStarted","Data":"abe18ab1164dbdee98725117dd57818d1fc61d00af1dd18b4ef720acfbb4b5f6"} Feb 02 13:22:12 crc kubenswrapper[4955]: I0202 13:22:12.150016 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ffcfc78d-bc88-4849-843b-106dbb020bb4","Type":"ContainerStarted","Data":"073eb7e0695f0ca65ef53a735ec054d3dbe161da1d1f32882bf7b61a736c134c"} Feb 02 13:22:12 crc kubenswrapper[4955]: I0202 13:22:12.172440 4955 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.172422313 podStartE2EDuration="2.172422313s" podCreationTimestamp="2026-02-02 13:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:12.164108632 +0000 UTC m=+1183.076445092" watchObservedRunningTime="2026-02-02 13:22:12.172422313 +0000 UTC m=+1183.084758763" Feb 02 13:22:12 crc kubenswrapper[4955]: I0202 13:22:12.190029 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.190014081 podStartE2EDuration="2.190014081s" podCreationTimestamp="2026-02-02 13:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:12.187744905 +0000 UTC m=+1183.100081355" watchObservedRunningTime="2026-02-02 13:22:12.190014081 +0000 UTC m=+1183.102350531" Feb 02 13:22:15 crc kubenswrapper[4955]: I0202 13:22:15.536775 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 13:22:15 crc kubenswrapper[4955]: I0202 13:22:15.986310 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 13:22:15 crc kubenswrapper[4955]: I0202 13:22:15.986671 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 13:22:17 crc kubenswrapper[4955]: I0202 13:22:17.464180 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:22:17 crc kubenswrapper[4955]: I0202 13:22:17.464593 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:22:18 crc kubenswrapper[4955]: I0202 13:22:18.477759 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9056a98-ddc3-4c1b-8c5d-25a03e6163ce" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:22:18 crc kubenswrapper[4955]: I0202 13:22:18.477784 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9056a98-ddc3-4c1b-8c5d-25a03e6163ce" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:22:20 crc kubenswrapper[4955]: I0202 13:22:20.537264 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 13:22:20 crc kubenswrapper[4955]: I0202 13:22:20.562211 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 13:22:20 crc kubenswrapper[4955]: I0202 13:22:20.985497 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 13:22:20 crc kubenswrapper[4955]: I0202 13:22:20.985837 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 13:22:21 crc kubenswrapper[4955]: I0202 13:22:21.271726 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 13:22:22 crc kubenswrapper[4955]: I0202 13:22:22.036010 4955 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="21d4cabc-0090-40ac-8afb-15ef9def8f7d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:22:22 crc kubenswrapper[4955]: I0202 13:22:22.036085 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="21d4cabc-0090-40ac-8afb-15ef9def8f7d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:22:26 crc kubenswrapper[4955]: I0202 13:22:26.765426 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 13:22:27 crc kubenswrapper[4955]: I0202 13:22:27.473053 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 13:22:27 crc kubenswrapper[4955]: I0202 13:22:27.473611 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 13:22:27 crc kubenswrapper[4955]: I0202 13:22:27.487091 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 13:22:27 crc kubenswrapper[4955]: I0202 13:22:27.488318 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 13:22:28 crc kubenswrapper[4955]: I0202 13:22:28.318163 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 13:22:28 crc kubenswrapper[4955]: I0202 13:22:28.324329 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 13:22:30 crc kubenswrapper[4955]: I0202 13:22:30.991072 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 13:22:30 crc kubenswrapper[4955]: I0202 13:22:30.991665 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 13:22:30 crc kubenswrapper[4955]: I0202 13:22:30.997203 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 13:22:31 crc kubenswrapper[4955]: I0202 13:22:31.359776 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 13:22:39 crc kubenswrapper[4955]: I0202 13:22:39.213978 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:22:40 crc kubenswrapper[4955]: I0202 13:22:40.109994 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:22:43 crc kubenswrapper[4955]: I0202 13:22:43.471546 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerName="rabbitmq" containerID="cri-o://50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73" gracePeriod=604796 Feb 02 13:22:43 crc kubenswrapper[4955]: I0202 13:22:43.945422 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerName="rabbitmq" containerID="cri-o://b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238" gracePeriod=604797 Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.079677 4955 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157274 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/60f684bd-051c-4608-8c11-1058cd2d6a01-erlang-cookie-secret\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157348 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-445hq\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-kube-api-access-445hq\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157467 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-plugins\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157502 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-erlang-cookie\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157647 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/60f684bd-051c-4608-8c11-1058cd2d6a01-pod-info\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157678 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-server-conf\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157708 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-tls\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157750 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-confd\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157804 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-config-data\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157836 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-plugins-conf\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.157860 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"60f684bd-051c-4608-8c11-1058cd2d6a01\" (UID: \"60f684bd-051c-4608-8c11-1058cd2d6a01\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.165670 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.167421 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/60f684bd-051c-4608-8c11-1058cd2d6a01-pod-info" (OuterVolumeSpecName: "pod-info") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.168007 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.168775 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.176980 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.176783 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f684bd-051c-4608-8c11-1058cd2d6a01-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.186940 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-kube-api-access-445hq" (OuterVolumeSpecName: "kube-api-access-445hq") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "kube-api-access-445hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.213345 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.256228 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-server-conf" (OuterVolumeSpecName: "server-conf") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260010 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260048 4955 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/60f684bd-051c-4608-8c11-1058cd2d6a01-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260077 4955 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260088 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260098 4955 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260137 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260148 4955 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/60f684bd-051c-4608-8c11-1058cd2d6a01-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260168 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-445hq\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-kube-api-access-445hq\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.260178 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.270942 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-config-data" (OuterVolumeSpecName: "config-data") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.290789 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.363504 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60f684bd-051c-4608-8c11-1058cd2d6a01-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.363536 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.366416 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "60f684bd-051c-4608-8c11-1058cd2d6a01" (UID: "60f684bd-051c-4608-8c11-1058cd2d6a01"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.467890 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/60f684bd-051c-4608-8c11-1058cd2d6a01-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.543069 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.570774 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-server-conf\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.570853 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-erlang-cookie\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.570886 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-pod-info\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.570923 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-plugins\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571014 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-tls\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571044 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-plugins-conf\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571068 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571095 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czcgr\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-kube-api-access-czcgr\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571194 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-config-data\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571255 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-confd\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: 
\"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.571299 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-erlang-cookie-secret\") pod \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\" (UID: \"827225b6-1672-40b1-a9ee-7dd2d5db2d1d\") " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.572940 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.575572 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.580742 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.589052 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-pod-info" (OuterVolumeSpecName: "pod-info") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.590806 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-kube-api-access-czcgr" (OuterVolumeSpecName: "kube-api-access-czcgr") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "kube-api-access-czcgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.591922 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.593129 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.593169 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.639518 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-config-data" (OuterVolumeSpecName: "config-data") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.670279 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-server-conf" (OuterVolumeSpecName: "server-conf") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.670437 4955 generic.go:334] "Generic (PLEG): container finished" podID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerID="b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238" exitCode=0 Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.670462 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.670484 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"827225b6-1672-40b1-a9ee-7dd2d5db2d1d","Type":"ContainerDied","Data":"b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238"} Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.670517 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"827225b6-1672-40b1-a9ee-7dd2d5db2d1d","Type":"ContainerDied","Data":"58f987034d906973bb20497c3fb808c9bd1776be3030461079d7dc94c82723ce"} Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.670533 4955 scope.go:117] "RemoveContainer" containerID="b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672825 4955 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672846 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672856 4955 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672867 4955 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672875 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672884 4955 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672912 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672922 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czcgr\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-kube-api-access-czcgr\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672931 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.672941 4955 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.676577 4955 generic.go:334] "Generic (PLEG): container finished" podID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerID="50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73" exitCode=0 Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.676616 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"60f684bd-051c-4608-8c11-1058cd2d6a01","Type":"ContainerDied","Data":"50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73"} Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.676661 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"60f684bd-051c-4608-8c11-1058cd2d6a01","Type":"ContainerDied","Data":"506df5a909d842b35472b2c72ae9f3a9941b49d26f97a552aff8b740a39ee593"} Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.676712 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.715819 4955 scope.go:117] "RemoveContainer" containerID="216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.740450 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.748203 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.749306 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "827225b6-1672-40b1-a9ee-7dd2d5db2d1d" (UID: "827225b6-1672-40b1-a9ee-7dd2d5db2d1d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.774205 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.774257 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827225b6-1672-40b1-a9ee-7dd2d5db2d1d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.778183 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.793130 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.793731 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerName="setup-container" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.793754 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerName="setup-container" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.793774 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerName="setup-container" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.793783 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerName="setup-container" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.793812 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerName="rabbitmq" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.793820 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerName="rabbitmq" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.793833 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerName="rabbitmq" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.793840 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerName="rabbitmq" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.794083 4955 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" containerName="rabbitmq" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.794105 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" containerName="rabbitmq" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.795371 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.800905 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.800919 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.800923 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.800979 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.800996 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.801046 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.801968 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x5n8j" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.804976 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.860547 4955 scope.go:117] "RemoveContainer" containerID="b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.861153 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238\": container with ID starting with b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238 not found: ID does not exist" containerID="b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.861217 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238"} err="failed to get container status \"b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238\": rpc error: code = NotFound desc = could not find container \"b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238\": container with ID starting with b12b8e45ce088705b73de1339228a4a64a431496ca2972801565bfbdc998a238 not found: ID does not exist" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.861247 4955 scope.go:117] "RemoveContainer" containerID="216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.861755 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f\": container with ID starting with 
216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f not found: ID does not exist" containerID="216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.861789 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f"} err="failed to get container status \"216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f\": rpc error: code = NotFound desc = could not find container \"216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f\": container with ID starting with 216837147ce032310375d7abc39ae39496834919c5c5e8977850993a32772a6f not found: ID does not exist" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.861812 4955 scope.go:117] "RemoveContainer" containerID="50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.914379 4955 scope.go:117] "RemoveContainer" containerID="7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.945417 4955 scope.go:117] "RemoveContainer" containerID="50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.946028 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73\": container with ID starting with 50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73 not found: ID does not exist" containerID="50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.946400 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73"} err="failed to get container status \"50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73\": rpc error: code = NotFound desc = could not find container \"50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73\": container with ID starting with 50eb1031dbe225b15ca93c57f75d98168064e4aebf2f30baa5878661ad315c73 not found: ID does not exist" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.946590 4955 scope.go:117] "RemoveContainer" containerID="7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1" Feb 02 13:22:50 crc kubenswrapper[4955]: E0202 13:22:50.947083 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1\": container with ID starting with 7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1 not found: ID does not exist" containerID="7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.947180 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1"} err="failed to get container status \"7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1\": rpc error: code = NotFound desc = could not find container \"7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1\": container with ID starting with 
7321d199cd79f17689d66004060101e7046bc926120421085d6c17ae90cbfbd1 not found: ID does not exist" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977041 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977096 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977135 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b3af0f-2cf7-46b2-8558-fd172852d771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977187 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977220 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xkb\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-kube-api-access-44xkb\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977291 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b3af0f-2cf7-46b2-8558-fd172852d771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977320 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977339 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " 
pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977375 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:50 crc kubenswrapper[4955]: I0202 13:22:50.977406 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-config-data\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.010814 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.019088 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.040939 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.042947 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.044818 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.045179 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.045207 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.045180 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ct2pn" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.045507 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.045826 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.046807 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.063153 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079468 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079515 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc 
kubenswrapper[4955]: I0202 13:22:51.079546 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b3af0f-2cf7-46b2-8558-fd172852d771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079619 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xkb\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-kube-api-access-44xkb\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079677 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079710 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b3af0f-2cf7-46b2-8558-fd172852d771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079729 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079746 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079770 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.079791 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-config-data\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.080495 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.080863 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.081425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b3af0f-2cf7-46b2-8558-fd172852d771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.082404 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.082969 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.083195 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.085977 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.086984 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.089833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b3af0f-2cf7-46b2-8558-fd172852d771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.098844 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b3af0f-2cf7-46b2-8558-fd172852d771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.107762 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xkb\" (UniqueName: 
\"kubernetes.io/projected/94b3af0f-2cf7-46b2-8558-fd172852d771-kube-api-access-44xkb\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.118767 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"94b3af0f-2cf7-46b2-8558-fd172852d771\") " pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.181049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.181363 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.181689 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd53290f-8544-4e49-9f1b-8f3bc28332fc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.181868 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.181985 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.182066 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.182151 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthv5\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-kube-api-access-sthv5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.182247 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.182371 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.182470 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.182590 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd53290f-8544-4e49-9f1b-8f3bc28332fc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.204050 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.291987 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd53290f-8544-4e49-9f1b-8f3bc28332fc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292050 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292083 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292113 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthv5\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-kube-api-access-sthv5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292135 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292168 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292231 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292257 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292295 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd53290f-8544-4e49-9f1b-8f3bc28332fc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292371 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292396 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.292923 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.294001 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.295010 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.296894 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.297421 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.297956 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd53290f-8544-4e49-9f1b-8f3bc28332fc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.298651 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd53290f-8544-4e49-9f1b-8f3bc28332fc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.298999 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd53290f-8544-4e49-9f1b-8f3bc28332fc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.304805 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.314171 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.319378 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthv5\" (UniqueName: \"kubernetes.io/projected/bd53290f-8544-4e49-9f1b-8f3bc28332fc-kube-api-access-sthv5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.354347 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bd53290f-8544-4e49-9f1b-8f3bc28332fc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.366050 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.655481 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.690351 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94b3af0f-2cf7-46b2-8558-fd172852d771","Type":"ContainerStarted","Data":"73abc16264466af06adfc1392efb9a455dfe7503c23aa59d6f61b402ca18cd21"} Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.728341 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f684bd-051c-4608-8c11-1058cd2d6a01" path="/var/lib/kubelet/pods/60f684bd-051c-4608-8c11-1058cd2d6a01/volumes" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.729271 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827225b6-1672-40b1-a9ee-7dd2d5db2d1d" path="/var/lib/kubelet/pods/827225b6-1672-40b1-a9ee-7dd2d5db2d1d/volumes" Feb 02 13:22:51 crc kubenswrapper[4955]: I0202 13:22:51.878707 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:22:52 crc kubenswrapper[4955]: I0202 13:22:52.699755 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bd53290f-8544-4e49-9f1b-8f3bc28332fc","Type":"ContainerStarted","Data":"389ec8c46a337f253ad7f03659a17efb71a87155195296c22db45ddf746d859b"} Feb 02 13:22:52 crc kubenswrapper[4955]: I0202 13:22:52.947577 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-ptvtw"] Feb 02 13:22:52 crc kubenswrapper[4955]: I0202 13:22:52.949202 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:52 crc kubenswrapper[4955]: I0202 13:22:52.952788 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 13:22:52 crc kubenswrapper[4955]: I0202 13:22:52.956958 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-ptvtw"] Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130153 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130405 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5n7s\" (UniqueName: \"kubernetes.io/projected/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-kube-api-access-d5n7s\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130493 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130589 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-config\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130640 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130792 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.130944 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.233175 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.233683 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.233787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.233964 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5n7s\" (UniqueName: \"kubernetes.io/projected/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-kube-api-access-d5n7s\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.234030 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.234097 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-config\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.234128 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.234833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.235205 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-config\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.235307 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.235345 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.235367 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.237541 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.261889 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5n7s\" (UniqueName: \"kubernetes.io/projected/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-kube-api-access-d5n7s\") pod \"dnsmasq-dns-7d84b4d45c-ptvtw\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.278262 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.709399 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bd53290f-8544-4e49-9f1b-8f3bc28332fc","Type":"ContainerStarted","Data":"d753b81311a66bcd1d10a5c4c9bba066b7c48acfcdcf9653804551856b06ac97"} Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.712632 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94b3af0f-2cf7-46b2-8558-fd172852d771","Type":"ContainerStarted","Data":"755e55902b2ae3b4381a45f187386f756c0984f48776ee368183ba48e95ef261"} Feb 02 13:22:53 crc kubenswrapper[4955]: W0202 13:22:53.722902 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687804d6_a6f1_4a94_9c7f_4c822a75e7e8.slice/crio-9de9eabc093df4674782c4097fc292bc3f6edcf3f1039826ef8b89409947910d WatchSource:0}: Error finding container 9de9eabc093df4674782c4097fc292bc3f6edcf3f1039826ef8b89409947910d: Status 404 returned error can't find the container with id 9de9eabc093df4674782c4097fc292bc3f6edcf3f1039826ef8b89409947910d Feb 02 13:22:53 crc kubenswrapper[4955]: I0202 13:22:53.726593 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-ptvtw"] Feb 02 13:22:54 crc kubenswrapper[4955]: I0202 13:22:54.729078 4955 generic.go:334] "Generic (PLEG): container finished" podID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerID="b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435" exitCode=0 Feb 02 13:22:54 crc kubenswrapper[4955]: I0202 13:22:54.729166 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" event={"ID":"687804d6-a6f1-4a94-9c7f-4c822a75e7e8","Type":"ContainerDied","Data":"b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435"} Feb 02 13:22:54 crc kubenswrapper[4955]: I0202 13:22:54.729511 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" event={"ID":"687804d6-a6f1-4a94-9c7f-4c822a75e7e8","Type":"ContainerStarted","Data":"9de9eabc093df4674782c4097fc292bc3f6edcf3f1039826ef8b89409947910d"} Feb 02 13:22:55 crc kubenswrapper[4955]: I0202 13:22:55.739361 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" event={"ID":"687804d6-a6f1-4a94-9c7f-4c822a75e7e8","Type":"ContainerStarted","Data":"5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0"} Feb 02 13:22:55 crc kubenswrapper[4955]: I0202 13:22:55.739754 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:22:55 crc kubenswrapper[4955]: I0202 13:22:55.776840 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" podStartSLOduration=3.776813563 podStartE2EDuration="3.776813563s" podCreationTimestamp="2026-02-02 13:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:55.764577715 +0000 UTC m=+1226.676914205" watchObservedRunningTime="2026-02-02 13:22:55.776813563 +0000 UTC m=+1226.689150043" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.279804 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:23:03 crc 
kubenswrapper[4955]: I0202 13:23:03.339364 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"] Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.339589 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerName="dnsmasq-dns" containerID="cri-o://91d8d9e707711fcae426e8b170cb80981d57c808662fb514ea1b081fb8151a98" gracePeriod=10 Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.494480 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f64xq"] Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.498027 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.515396 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f64xq"] Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.551757 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.551804 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.551871 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gmm\" (UniqueName: \"kubernetes.io/projected/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-kube-api-access-64gmm\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.551898 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-config\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.551920 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.551938 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.552001 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653556 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653662 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64gmm\" (UniqueName: \"kubernetes.io/projected/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-kube-api-access-64gmm\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653692 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653711 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-config\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653733 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653795 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.653840 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.654451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.654598 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.654930 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.654973 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.655246 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.655749 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-config\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.676088 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64gmm\" (UniqueName: \"kubernetes.io/projected/daa79467-2e3c-4fa1-b8d0-ca5af14ed437-kube-api-access-64gmm\") pod \"dnsmasq-dns-6f6df4f56c-f64xq\" (UID: \"daa79467-2e3c-4fa1-b8d0-ca5af14ed437\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.813998 4955 generic.go:334] "Generic (PLEG): container finished" podID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerID="91d8d9e707711fcae426e8b170cb80981d57c808662fb514ea1b081fb8151a98" exitCode=0 Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.814426 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" event={"ID":"0ff9b0c5-ac10-49d8-8876-f605852f490d","Type":"ContainerDied","Data":"91d8d9e707711fcae426e8b170cb80981d57c808662fb514ea1b081fb8151a98"} Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.814531 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" event={"ID":"0ff9b0c5-ac10-49d8-8876-f605852f490d","Type":"ContainerDied","Data":"ec1dbf3910ae5d5c45344e761c99dbea9d9b99f524b280b5b1c6f25a5feebab8"} Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.814640 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1dbf3910ae5d5c45344e761c99dbea9d9b99f524b280b5b1c6f25a5feebab8" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.841219 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:03 crc kubenswrapper[4955]: I0202 13:23:03.938413 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.063297 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-config\") pod \"0ff9b0c5-ac10-49d8-8876-f605852f490d\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.063456 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vx77\" (UniqueName: \"kubernetes.io/projected/0ff9b0c5-ac10-49d8-8876-f605852f490d-kube-api-access-2vx77\") pod \"0ff9b0c5-ac10-49d8-8876-f605852f490d\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.063519 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-svc\") pod \"0ff9b0c5-ac10-49d8-8876-f605852f490d\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.063602 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-nb\") pod \"0ff9b0c5-ac10-49d8-8876-f605852f490d\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.063628 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-sb\") pod \"0ff9b0c5-ac10-49d8-8876-f605852f490d\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.063731 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-swift-storage-0\") pod \"0ff9b0c5-ac10-49d8-8876-f605852f490d\" (UID: \"0ff9b0c5-ac10-49d8-8876-f605852f490d\") " Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.068878 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff9b0c5-ac10-49d8-8876-f605852f490d-kube-api-access-2vx77" (OuterVolumeSpecName: "kube-api-access-2vx77") pod "0ff9b0c5-ac10-49d8-8876-f605852f490d" (UID: "0ff9b0c5-ac10-49d8-8876-f605852f490d"). InnerVolumeSpecName "kube-api-access-2vx77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.109417 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ff9b0c5-ac10-49d8-8876-f605852f490d" (UID: "0ff9b0c5-ac10-49d8-8876-f605852f490d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.109694 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ff9b0c5-ac10-49d8-8876-f605852f490d" (UID: "0ff9b0c5-ac10-49d8-8876-f605852f490d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.115187 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ff9b0c5-ac10-49d8-8876-f605852f490d" (UID: "0ff9b0c5-ac10-49d8-8876-f605852f490d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.123603 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ff9b0c5-ac10-49d8-8876-f605852f490d" (UID: "0ff9b0c5-ac10-49d8-8876-f605852f490d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.126004 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-config" (OuterVolumeSpecName: "config") pod "0ff9b0c5-ac10-49d8-8876-f605852f490d" (UID: "0ff9b0c5-ac10-49d8-8876-f605852f490d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.165819 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.165856 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.165866 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.165876 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.165885 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vx77\" (UniqueName: \"kubernetes.io/projected/0ff9b0c5-ac10-49d8-8876-f605852f490d-kube-api-access-2vx77\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.165896 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ff9b0c5-ac10-49d8-8876-f605852f490d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.290365 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f64xq"] Feb 02 
13:23:04 crc kubenswrapper[4955]: W0202 13:23:04.292707 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaa79467_2e3c_4fa1_b8d0_ca5af14ed437.slice/crio-06c708557c5d7f62b01bdbde8bd6ba6883f60e483ff992bc225974030fa487d5 WatchSource:0}: Error finding container 06c708557c5d7f62b01bdbde8bd6ba6883f60e483ff992bc225974030fa487d5: Status 404 returned error can't find the container with id 06c708557c5d7f62b01bdbde8bd6ba6883f60e483ff992bc225974030fa487d5 Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.823819 4955 generic.go:334] "Generic (PLEG): container finished" podID="daa79467-2e3c-4fa1-b8d0-ca5af14ed437" containerID="abc589808696f39463cddc7e1031a4cd95808ba932961546d9552945260caec6" exitCode=0 Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.823932 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" event={"ID":"daa79467-2e3c-4fa1-b8d0-ca5af14ed437","Type":"ContainerDied","Data":"abc589808696f39463cddc7e1031a4cd95808ba932961546d9552945260caec6"} Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.824503 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" event={"ID":"daa79467-2e3c-4fa1-b8d0-ca5af14ed437","Type":"ContainerStarted","Data":"06c708557c5d7f62b01bdbde8bd6ba6883f60e483ff992bc225974030fa487d5"} Feb 02 13:23:04 crc kubenswrapper[4955]: I0202 13:23:04.824625 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5" Feb 02 13:23:05 crc kubenswrapper[4955]: I0202 13:23:05.004085 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"] Feb 02 13:23:05 crc kubenswrapper[4955]: I0202 13:23:05.013328 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-4qvx5"] Feb 02 13:23:05 crc kubenswrapper[4955]: I0202 13:23:05.727606 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" path="/var/lib/kubelet/pods/0ff9b0c5-ac10-49d8-8876-f605852f490d/volumes" Feb 02 13:23:05 crc kubenswrapper[4955]: I0202 13:23:05.834354 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" event={"ID":"daa79467-2e3c-4fa1-b8d0-ca5af14ed437","Type":"ContainerStarted","Data":"be2899a48b9fdc14c783c5958f56cab78077448f8f8cf8161ca20cdc8301e4c5"} Feb 02 13:23:05 crc kubenswrapper[4955]: I0202 13:23:05.834643 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:05 crc kubenswrapper[4955]: I0202 13:23:05.863262 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" podStartSLOduration=2.863243529 podStartE2EDuration="2.863243529s" podCreationTimestamp="2026-02-02 13:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:05.854009435 +0000 UTC m=+1236.766345885" watchObservedRunningTime="2026-02-02 13:23:05.863243529 +0000 UTC m=+1236.775579979" Feb 02 13:23:13 crc kubenswrapper[4955]: I0202 13:23:13.843142 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-f64xq" Feb 02 13:23:13 crc kubenswrapper[4955]: I0202 13:23:13.904989 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7d84b4d45c-ptvtw"] Feb 02 13:23:13 crc kubenswrapper[4955]: I0202 13:23:13.905323 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerName="dnsmasq-dns" containerID="cri-o://5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0" gracePeriod=10 Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.446428 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.553891 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5n7s\" (UniqueName: \"kubernetes.io/projected/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-kube-api-access-d5n7s\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.553965 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-openstack-edpm-ipam\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.554003 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-config\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.554023 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-swift-storage-0\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.554085 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-sb\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.554864 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-nb\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.554893 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-svc\") pod \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\" (UID: \"687804d6-a6f1-4a94-9c7f-4c822a75e7e8\") " Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.559916 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-kube-api-access-d5n7s" (OuterVolumeSpecName: "kube-api-access-d5n7s") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "kube-api-access-d5n7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.615178 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.618014 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.624664 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.628733 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.635848 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-config" (OuterVolumeSpecName: "config") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.645594 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "687804d6-a6f1-4a94-9c7f-4c822a75e7e8" (UID: "687804d6-a6f1-4a94-9c7f-4c822a75e7e8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657806 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657849 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657865 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5n7s\" (UniqueName: \"kubernetes.io/projected/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-kube-api-access-d5n7s\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657881 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657895 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657905 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.657916 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/687804d6-a6f1-4a94-9c7f-4c822a75e7e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.923497 4955 generic.go:334] "Generic (PLEG): container finished" podID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerID="5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0" exitCode=0 Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.923565 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" event={"ID":"687804d6-a6f1-4a94-9c7f-4c822a75e7e8","Type":"ContainerDied","Data":"5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0"} Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.923791 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" event={"ID":"687804d6-a6f1-4a94-9c7f-4c822a75e7e8","Type":"ContainerDied","Data":"9de9eabc093df4674782c4097fc292bc3f6edcf3f1039826ef8b89409947910d"} Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.923812 4955 scope.go:117] "RemoveContainer" containerID="5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.923609 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-ptvtw" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.944595 4955 scope.go:117] "RemoveContainer" containerID="b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.960939 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-ptvtw"] Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.972174 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-ptvtw"] Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.978770 4955 scope.go:117] "RemoveContainer" containerID="5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0" Feb 02 13:23:14 crc kubenswrapper[4955]: E0202 13:23:14.979257 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0\": container with ID starting with 5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0 not found: ID does not exist" containerID="5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.979302 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0"} err="failed to get container status \"5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0\": rpc error: code = NotFound desc = could not find container \"5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0\": container with ID starting with 5dce9baa2db15ffe036849c3b233d4e5e4f3aa7a36f660dc6f38d59aec7ffaa0 not found: ID does not exist" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.979333 4955 scope.go:117] "RemoveContainer" containerID="b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435" Feb 02 13:23:14 crc kubenswrapper[4955]: E0202 13:23:14.979699 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435\": container with ID starting with b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435 not found: ID does not exist" containerID="b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435" Feb 02 13:23:14 crc kubenswrapper[4955]: I0202 13:23:14.979739 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435"} err="failed to get container status \"b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435\": rpc error: code = NotFound desc = could not find container \"b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435\": container with ID starting with b63ca48eccc375b553e76f28863e673078c9421ecd6668c419841ef5ed392435 not found: ID does not exist" Feb 02 13:23:15 crc kubenswrapper[4955]: I0202 13:23:15.727212 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" path="/var/lib/kubelet/pods/687804d6-a6f1-4a94-9c7f-4c822a75e7e8/volumes" Feb 02 13:23:26 crc kubenswrapper[4955]: I0202 13:23:26.018976 4955 generic.go:334] "Generic (PLEG): container finished" podID="94b3af0f-2cf7-46b2-8558-fd172852d771" containerID="755e55902b2ae3b4381a45f187386f756c0984f48776ee368183ba48e95ef261" 
exitCode=0 Feb 02 13:23:26 crc kubenswrapper[4955]: I0202 13:23:26.019111 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94b3af0f-2cf7-46b2-8558-fd172852d771","Type":"ContainerDied","Data":"755e55902b2ae3b4381a45f187386f756c0984f48776ee368183ba48e95ef261"} Feb 02 13:23:26 crc kubenswrapper[4955]: I0202 13:23:26.022030 4955 generic.go:334] "Generic (PLEG): container finished" podID="bd53290f-8544-4e49-9f1b-8f3bc28332fc" containerID="d753b81311a66bcd1d10a5c4c9bba066b7c48acfcdcf9653804551856b06ac97" exitCode=0 Feb 02 13:23:26 crc kubenswrapper[4955]: I0202 13:23:26.022080 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bd53290f-8544-4e49-9f1b-8f3bc28332fc","Type":"ContainerDied","Data":"d753b81311a66bcd1d10a5c4c9bba066b7c48acfcdcf9653804551856b06ac97"} Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.033825 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94b3af0f-2cf7-46b2-8558-fd172852d771","Type":"ContainerStarted","Data":"47926de08eae3dde28629cbf9d9785f94039d3d383656762ae546b138299612c"} Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.036083 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.038935 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bd53290f-8544-4e49-9f1b-8f3bc28332fc","Type":"ContainerStarted","Data":"02aaf5bc896a36ea16fc76ef30045984eb52a3a82200702dd460fb275cc39ec1"} Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.039766 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.138350 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.138330464 podStartE2EDuration="37.138330464s" podCreationTimestamp="2026-02-02 13:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:27.066337236 +0000 UTC m=+1257.978673706" watchObservedRunningTime="2026-02-02 13:23:27.138330464 +0000 UTC m=+1258.050666924" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.143414 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.143390557 podStartE2EDuration="36.143390557s" podCreationTimestamp="2026-02-02 13:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:27.133454655 +0000 UTC m=+1258.045791136" watchObservedRunningTime="2026-02-02 13:23:27.143390557 +0000 UTC m=+1258.055727017" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.173122 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q"] Feb 02 13:23:27 crc kubenswrapper[4955]: E0202 13:23:27.176190 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerName="dnsmasq-dns" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.176226 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerName="dnsmasq-dns" Feb 02 13:23:27 crc 
kubenswrapper[4955]: E0202 13:23:27.176252 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerName="dnsmasq-dns" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.176258 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerName="dnsmasq-dns" Feb 02 13:23:27 crc kubenswrapper[4955]: E0202 13:23:27.176277 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerName="init" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.176283 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerName="init" Feb 02 13:23:27 crc kubenswrapper[4955]: E0202 13:23:27.176295 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerName="init" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.176301 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerName="init" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.176517 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff9b0c5-ac10-49d8-8876-f605852f490d" containerName="dnsmasq-dns" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.176534 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="687804d6-a6f1-4a94-9c7f-4c822a75e7e8" containerName="dnsmasq-dns" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.177213 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.181118 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.181315 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.181520 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.181742 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.183313 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55ckd\" (UniqueName: \"kubernetes.io/projected/e45f9d86-932f-4539-a7f3-f302ae6dfd53-kube-api-access-55ckd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.183420 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.183507 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.183580 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.188875 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q"] Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.284913 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.284996 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.285657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55ckd\" (UniqueName: \"kubernetes.io/projected/e45f9d86-932f-4539-a7f3-f302ae6dfd53-kube-api-access-55ckd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.285767 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.290154 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.290257 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") 
" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.291053 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.306185 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55ckd\" (UniqueName: \"kubernetes.io/projected/e45f9d86-932f-4539-a7f3-f302ae6dfd53-kube-api-access-55ckd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:27 crc kubenswrapper[4955]: I0202 13:23:27.505082 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:28 crc kubenswrapper[4955]: W0202 13:23:28.082815 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode45f9d86_932f_4539_a7f3_f302ae6dfd53.slice/crio-2b963104c550b84f7ccaf40f25690254d22ebe194c8426b1e580c0821818fa73 WatchSource:0}: Error finding container 2b963104c550b84f7ccaf40f25690254d22ebe194c8426b1e580c0821818fa73: Status 404 returned error can't find the container with id 2b963104c550b84f7ccaf40f25690254d22ebe194c8426b1e580c0821818fa73 Feb 02 13:23:28 crc kubenswrapper[4955]: I0202 13:23:28.088457 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q"] Feb 02 13:23:29 crc kubenswrapper[4955]: I0202 13:23:29.058413 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" event={"ID":"e45f9d86-932f-4539-a7f3-f302ae6dfd53","Type":"ContainerStarted","Data":"2b963104c550b84f7ccaf40f25690254d22ebe194c8426b1e580c0821818fa73"} Feb 02 13:23:33 crc kubenswrapper[4955]: I0202 13:23:33.017361 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:23:33 crc kubenswrapper[4955]: I0202 13:23:33.017910 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:23:37 crc kubenswrapper[4955]: I0202 13:23:37.827770 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:23:38 crc kubenswrapper[4955]: I0202 13:23:38.154471 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" event={"ID":"e45f9d86-932f-4539-a7f3-f302ae6dfd53","Type":"ContainerStarted","Data":"9529362764a3ed2511536857cb63ed4c4a7985320463c8b1094e3e1259992855"} Feb 02 13:23:38 crc kubenswrapper[4955]: I0202 
13:23:38.176571 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" podStartSLOduration=1.438768881 podStartE2EDuration="11.176533174s" podCreationTimestamp="2026-02-02 13:23:27 +0000 UTC" firstStartedPulling="2026-02-02 13:23:28.087153306 +0000 UTC m=+1258.999489756" lastFinishedPulling="2026-02-02 13:23:37.824917599 +0000 UTC m=+1268.737254049" observedRunningTime="2026-02-02 13:23:38.172788794 +0000 UTC m=+1269.085125254" watchObservedRunningTime="2026-02-02 13:23:38.176533174 +0000 UTC m=+1269.088869624" Feb 02 13:23:41 crc kubenswrapper[4955]: I0202 13:23:41.205697 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 13:23:41 crc kubenswrapper[4955]: I0202 13:23:41.369236 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:23:50 crc kubenswrapper[4955]: I0202 13:23:50.552171 4955 generic.go:334] "Generic (PLEG): container finished" podID="e45f9d86-932f-4539-a7f3-f302ae6dfd53" containerID="9529362764a3ed2511536857cb63ed4c4a7985320463c8b1094e3e1259992855" exitCode=0 Feb 02 13:23:50 crc kubenswrapper[4955]: I0202 13:23:50.552730 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" event={"ID":"e45f9d86-932f-4539-a7f3-f302ae6dfd53","Type":"ContainerDied","Data":"9529362764a3ed2511536857cb63ed4c4a7985320463c8b1094e3e1259992855"} Feb 02 13:23:51 crc kubenswrapper[4955]: I0202 13:23:51.992984 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.064973 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55ckd\" (UniqueName: \"kubernetes.io/projected/e45f9d86-932f-4539-a7f3-f302ae6dfd53-kube-api-access-55ckd\") pod \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.065091 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-repo-setup-combined-ca-bundle\") pod \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.065134 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-ssh-key-openstack-edpm-ipam\") pod \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.065221 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-inventory\") pod \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\" (UID: \"e45f9d86-932f-4539-a7f3-f302ae6dfd53\") " Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.070909 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e45f9d86-932f-4539-a7f3-f302ae6dfd53" 
(UID: "e45f9d86-932f-4539-a7f3-f302ae6dfd53"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.071886 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45f9d86-932f-4539-a7f3-f302ae6dfd53-kube-api-access-55ckd" (OuterVolumeSpecName: "kube-api-access-55ckd") pod "e45f9d86-932f-4539-a7f3-f302ae6dfd53" (UID: "e45f9d86-932f-4539-a7f3-f302ae6dfd53"). InnerVolumeSpecName "kube-api-access-55ckd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.094289 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e45f9d86-932f-4539-a7f3-f302ae6dfd53" (UID: "e45f9d86-932f-4539-a7f3-f302ae6dfd53"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.100549 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-inventory" (OuterVolumeSpecName: "inventory") pod "e45f9d86-932f-4539-a7f3-f302ae6dfd53" (UID: "e45f9d86-932f-4539-a7f3-f302ae6dfd53"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.168047 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55ckd\" (UniqueName: \"kubernetes.io/projected/e45f9d86-932f-4539-a7f3-f302ae6dfd53-kube-api-access-55ckd\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.168083 4955 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.168093 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.168103 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45f9d86-932f-4539-a7f3-f302ae6dfd53-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.572182 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" event={"ID":"e45f9d86-932f-4539-a7f3-f302ae6dfd53","Type":"ContainerDied","Data":"2b963104c550b84f7ccaf40f25690254d22ebe194c8426b1e580c0821818fa73"} Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.572454 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b963104c550b84f7ccaf40f25690254d22ebe194c8426b1e580c0821818fa73" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.572281 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.651188 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq"] Feb 02 13:23:52 crc kubenswrapper[4955]: E0202 13:23:52.653295 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45f9d86-932f-4539-a7f3-f302ae6dfd53" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.653332 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45f9d86-932f-4539-a7f3-f302ae6dfd53" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.654048 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45f9d86-932f-4539-a7f3-f302ae6dfd53" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.655088 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.660762 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.660800 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.661457 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.661514 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.669319 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq"] Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.678946 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.679003 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.679260 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5d22\" (UniqueName: \"kubernetes.io/projected/ac761814-9187-4129-8167-eb4fda3b94d8-kube-api-access-k5d22\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.781056 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.781100 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.781299 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5d22\" (UniqueName: \"kubernetes.io/projected/ac761814-9187-4129-8167-eb4fda3b94d8-kube-api-access-k5d22\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.787289 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.804046 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.806248 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5d22\" (UniqueName: \"kubernetes.io/projected/ac761814-9187-4129-8167-eb4fda3b94d8-kube-api-access-k5d22\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drkmq\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:52 crc kubenswrapper[4955]: I0202 13:23:52.979495 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:53 crc kubenswrapper[4955]: I0202 13:23:53.484677 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq"] Feb 02 13:23:53 crc kubenswrapper[4955]: W0202 13:23:53.485094 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac761814_9187_4129_8167_eb4fda3b94d8.slice/crio-f93a7327983ba5c4a08cc67e74312d413368fab151d60790bb0f65db1d824195 WatchSource:0}: Error finding container f93a7327983ba5c4a08cc67e74312d413368fab151d60790bb0f65db1d824195: Status 404 returned error can't find the container with id f93a7327983ba5c4a08cc67e74312d413368fab151d60790bb0f65db1d824195 Feb 02 13:23:53 crc kubenswrapper[4955]: I0202 13:23:53.581157 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" event={"ID":"ac761814-9187-4129-8167-eb4fda3b94d8","Type":"ContainerStarted","Data":"f93a7327983ba5c4a08cc67e74312d413368fab151d60790bb0f65db1d824195"} Feb 02 13:23:54 crc kubenswrapper[4955]: I0202 13:23:54.592756 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" event={"ID":"ac761814-9187-4129-8167-eb4fda3b94d8","Type":"ContainerStarted","Data":"fb5f8ad0582838dc6383a67561f7219853891eb5796355733e8ae422a41bb552"} Feb 02 13:23:54 crc kubenswrapper[4955]: I0202 13:23:54.621454 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" podStartSLOduration=2.208586399 podStartE2EDuration="2.621426751s" podCreationTimestamp="2026-02-02 13:23:52 +0000 UTC" firstStartedPulling="2026-02-02 13:23:53.489476742 +0000 UTC m=+1284.401813192" lastFinishedPulling="2026-02-02 13:23:53.902317094 +0000 UTC m=+1284.814653544" observedRunningTime="2026-02-02 13:23:54.613004366 +0000 UTC m=+1285.525340816" watchObservedRunningTime="2026-02-02 13:23:54.621426751 +0000 UTC m=+1285.533763201" Feb 02 13:23:56 crc kubenswrapper[4955]: I0202 13:23:56.609989 4955 generic.go:334] "Generic (PLEG): container finished" podID="ac761814-9187-4129-8167-eb4fda3b94d8" containerID="fb5f8ad0582838dc6383a67561f7219853891eb5796355733e8ae422a41bb552" exitCode=0 Feb 02 13:23:56 crc kubenswrapper[4955]: I0202 13:23:56.610064 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" event={"ID":"ac761814-9187-4129-8167-eb4fda3b94d8","Type":"ContainerDied","Data":"fb5f8ad0582838dc6383a67561f7219853891eb5796355733e8ae422a41bb552"} Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.151545 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.183086 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-ssh-key-openstack-edpm-ipam\") pod \"ac761814-9187-4129-8167-eb4fda3b94d8\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.183193 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5d22\" (UniqueName: \"kubernetes.io/projected/ac761814-9187-4129-8167-eb4fda3b94d8-kube-api-access-k5d22\") pod \"ac761814-9187-4129-8167-eb4fda3b94d8\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.183280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-inventory\") pod \"ac761814-9187-4129-8167-eb4fda3b94d8\" (UID: \"ac761814-9187-4129-8167-eb4fda3b94d8\") " Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.193864 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac761814-9187-4129-8167-eb4fda3b94d8-kube-api-access-k5d22" (OuterVolumeSpecName: "kube-api-access-k5d22") pod "ac761814-9187-4129-8167-eb4fda3b94d8" (UID: "ac761814-9187-4129-8167-eb4fda3b94d8"). InnerVolumeSpecName "kube-api-access-k5d22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.218069 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-inventory" (OuterVolumeSpecName: "inventory") pod "ac761814-9187-4129-8167-eb4fda3b94d8" (UID: "ac761814-9187-4129-8167-eb4fda3b94d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.221739 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac761814-9187-4129-8167-eb4fda3b94d8" (UID: "ac761814-9187-4129-8167-eb4fda3b94d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.284858 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.284895 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac761814-9187-4129-8167-eb4fda3b94d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.284907 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5d22\" (UniqueName: \"kubernetes.io/projected/ac761814-9187-4129-8167-eb4fda3b94d8-kube-api-access-k5d22\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.630570 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" event={"ID":"ac761814-9187-4129-8167-eb4fda3b94d8","Type":"ContainerDied","Data":"f93a7327983ba5c4a08cc67e74312d413368fab151d60790bb0f65db1d824195"} Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.630609 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drkmq" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.630618 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93a7327983ba5c4a08cc67e74312d413368fab151d60790bb0f65db1d824195" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.694705 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4"] Feb 02 13:23:58 crc kubenswrapper[4955]: E0202 13:23:58.695234 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac761814-9187-4129-8167-eb4fda3b94d8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.695257 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac761814-9187-4129-8167-eb4fda3b94d8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.695483 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac761814-9187-4129-8167-eb4fda3b94d8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.696270 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.698315 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.698315 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.699214 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.699234 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.706571 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4"] Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.793035 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.793431 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.793547 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c79t\" (UniqueName: \"kubernetes.io/projected/9d101010-cfe5-49b0-b956-df76cc70abfc-kube-api-access-8c79t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.793745 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.895760 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.896516 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.896543 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c79t\" (UniqueName: \"kubernetes.io/projected/9d101010-cfe5-49b0-b956-df76cc70abfc-kube-api-access-8c79t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.896587 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.900028 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.900817 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.901471 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:58 crc kubenswrapper[4955]: I0202 13:23:58.913764 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c79t\" (UniqueName: \"kubernetes.io/projected/9d101010-cfe5-49b0-b956-df76cc70abfc-kube-api-access-8c79t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:59 crc kubenswrapper[4955]: I0202 13:23:59.012463 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:23:59 crc kubenswrapper[4955]: I0202 13:23:59.509974 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4"] Feb 02 13:23:59 crc kubenswrapper[4955]: I0202 13:23:59.639952 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" event={"ID":"9d101010-cfe5-49b0-b956-df76cc70abfc","Type":"ContainerStarted","Data":"98410854372a7e5772b107e6f7ec5235bf29c385cdf1ed7b9c74e9807b161c83"} Feb 02 13:24:00 crc kubenswrapper[4955]: I0202 13:24:00.650352 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" event={"ID":"9d101010-cfe5-49b0-b956-df76cc70abfc","Type":"ContainerStarted","Data":"5a5e21f91611616fde655f38aeb0fbe32b83b9a885f7494970fb3bb0ea583ba6"} Feb 02 13:24:00 crc kubenswrapper[4955]: I0202 13:24:00.668273 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" podStartSLOduration=2.24787858 podStartE2EDuration="2.668253555s" podCreationTimestamp="2026-02-02 13:23:58 +0000 UTC" firstStartedPulling="2026-02-02 13:23:59.514659942 +0000 UTC m=+1290.426996392" lastFinishedPulling="2026-02-02 13:23:59.935034917 +0000 UTC m=+1290.847371367" observedRunningTime="2026-02-02 13:24:00.663969781 +0000 UTC m=+1291.576306251" watchObservedRunningTime="2026-02-02 13:24:00.668253555 +0000 UTC m=+1291.580590005" Feb 02 13:24:03 crc kubenswrapper[4955]: I0202 13:24:03.017438 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:24:03 crc kubenswrapper[4955]: I0202 13:24:03.019002 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.016968 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.017546 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.017646 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.018437 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"de8f7b51852eedd7c330a4f405023f03d69b18c14dc6e890327bc3a4eab66f6a"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.018504 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://de8f7b51852eedd7c330a4f405023f03d69b18c14dc6e890327bc3a4eab66f6a" gracePeriod=600 Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.968586 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="de8f7b51852eedd7c330a4f405023f03d69b18c14dc6e890327bc3a4eab66f6a" exitCode=0 Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.968696 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"de8f7b51852eedd7c330a4f405023f03d69b18c14dc6e890327bc3a4eab66f6a"} Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.969247 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"} Feb 02 13:24:33 crc kubenswrapper[4955]: I0202 13:24:33.969271 4955 scope.go:117] "RemoveContainer" containerID="602bc594f34404ff8d3d47bc3c3720ccccc87bdb99931ff5e26638726c7febe5" Feb 02 13:24:37 crc kubenswrapper[4955]: I0202 13:24:37.282052 4955 scope.go:117] "RemoveContainer" containerID="968da7fd198b59ffa6572be639223f876099280314ee40d775d4f1fe9de0b33d" Feb 02 13:24:37 crc kubenswrapper[4955]: I0202 13:24:37.305582 4955 scope.go:117] "RemoveContainer" containerID="c3d07a5f42ffb4a43cc7da94c1c0187ca4894c28d2e58b358a40156e31e24768" Feb 02 13:24:37 crc kubenswrapper[4955]: I0202 13:24:37.375241 4955 scope.go:117] "RemoveContainer" containerID="273750ce0f2b9342b0e12266a9717372ac4156f05b29bbf3c4dcc1d00e8f0a64" Feb 02 13:24:37 crc kubenswrapper[4955]: I0202 13:24:37.401651 4955 scope.go:117] "RemoveContainer" containerID="3c37174880b2aed37d6b3fa6e4a631044a5de1bb847112007526bd7b0e6412dd" Feb 02 13:26:33 crc kubenswrapper[4955]: I0202 13:26:33.016376 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:26:33 crc kubenswrapper[4955]: I0202 13:26:33.017062 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:26:58 crc kubenswrapper[4955]: I0202 13:26:58.258177 4955 generic.go:334] "Generic (PLEG): container finished" podID="9d101010-cfe5-49b0-b956-df76cc70abfc" containerID="5a5e21f91611616fde655f38aeb0fbe32b83b9a885f7494970fb3bb0ea583ba6" exitCode=0 Feb 02 13:26:58 crc kubenswrapper[4955]: I0202 13:26:58.258252 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" event={"ID":"9d101010-cfe5-49b0-b956-df76cc70abfc","Type":"ContainerDied","Data":"5a5e21f91611616fde655f38aeb0fbe32b83b9a885f7494970fb3bb0ea583ba6"} Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.663799 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.795252 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-inventory\") pod \"9d101010-cfe5-49b0-b956-df76cc70abfc\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.795501 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-bootstrap-combined-ca-bundle\") pod \"9d101010-cfe5-49b0-b956-df76cc70abfc\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.795544 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-ssh-key-openstack-edpm-ipam\") pod \"9d101010-cfe5-49b0-b956-df76cc70abfc\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.795605 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c79t\" (UniqueName: \"kubernetes.io/projected/9d101010-cfe5-49b0-b956-df76cc70abfc-kube-api-access-8c79t\") pod \"9d101010-cfe5-49b0-b956-df76cc70abfc\" (UID: \"9d101010-cfe5-49b0-b956-df76cc70abfc\") " Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.802383 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d101010-cfe5-49b0-b956-df76cc70abfc-kube-api-access-8c79t" (OuterVolumeSpecName: "kube-api-access-8c79t") pod "9d101010-cfe5-49b0-b956-df76cc70abfc" (UID: "9d101010-cfe5-49b0-b956-df76cc70abfc"). InnerVolumeSpecName "kube-api-access-8c79t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.802602 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9d101010-cfe5-49b0-b956-df76cc70abfc" (UID: "9d101010-cfe5-49b0-b956-df76cc70abfc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.825888 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d101010-cfe5-49b0-b956-df76cc70abfc" (UID: "9d101010-cfe5-49b0-b956-df76cc70abfc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.830683 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-inventory" (OuterVolumeSpecName: "inventory") pod "9d101010-cfe5-49b0-b956-df76cc70abfc" (UID: "9d101010-cfe5-49b0-b956-df76cc70abfc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.897464 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.897545 4955 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.897885 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d101010-cfe5-49b0-b956-df76cc70abfc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:59 crc kubenswrapper[4955]: I0202 13:26:59.897916 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c79t\" (UniqueName: \"kubernetes.io/projected/9d101010-cfe5-49b0-b956-df76cc70abfc-kube-api-access-8c79t\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.279831 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" event={"ID":"9d101010-cfe5-49b0-b956-df76cc70abfc","Type":"ContainerDied","Data":"98410854372a7e5772b107e6f7ec5235bf29c385cdf1ed7b9c74e9807b161c83"} Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.280166 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98410854372a7e5772b107e6f7ec5235bf29c385cdf1ed7b9c74e9807b161c83" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.279911 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.364315 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq"] Feb 02 13:27:00 crc kubenswrapper[4955]: E0202 13:27:00.364750 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d101010-cfe5-49b0-b956-df76cc70abfc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.364770 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d101010-cfe5-49b0-b956-df76cc70abfc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.364927 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d101010-cfe5-49b0-b956-df76cc70abfc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.365520 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.370732 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.370769 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.370812 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.371080 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.380742 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq"] Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.509620 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.509735 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s567x\" (UniqueName: \"kubernetes.io/projected/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-kube-api-access-s567x\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.509797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.612033 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s567x\" (UniqueName: \"kubernetes.io/projected/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-kube-api-access-s567x\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.612148 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.612316 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.617291 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.617313 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.632155 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s567x\" (UniqueName: \"kubernetes.io/projected/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-kube-api-access-s567x\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:00 crc kubenswrapper[4955]: I0202 13:27:00.690404 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:27:01 crc kubenswrapper[4955]: I0202 13:27:01.297303 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq"] Feb 02 13:27:01 crc kubenswrapper[4955]: I0202 13:27:01.307748 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:27:02 crc kubenswrapper[4955]: I0202 13:27:02.313930 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" event={"ID":"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29","Type":"ContainerStarted","Data":"8f739a51d02ea71895ec64c82c6b746480a5e878dec112a88e6f9497b6980030"} Feb 02 13:27:02 crc kubenswrapper[4955]: I0202 13:27:02.314268 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" event={"ID":"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29","Type":"ContainerStarted","Data":"8a5da5a92a741e4f0f55e49a8504793f0a16ef28c2780ea918ac0a8981341dcb"} Feb 02 13:27:02 crc kubenswrapper[4955]: I0202 13:27:02.331901 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" podStartSLOduration=1.924552314 podStartE2EDuration="2.331881892s" podCreationTimestamp="2026-02-02 13:27:00 +0000 UTC" firstStartedPulling="2026-02-02 13:27:01.307490496 +0000 UTC m=+1472.219826946" lastFinishedPulling="2026-02-02 13:27:01.714820074 +0000 UTC m=+1472.627156524" observedRunningTime="2026-02-02 13:27:02.330704593 +0000 UTC m=+1473.243041043" watchObservedRunningTime="2026-02-02 13:27:02.331881892 +0000 UTC m=+1473.244218342" Feb 02 13:27:03 crc 
Feb 02 13:27:03 crc kubenswrapper[4955]: I0202 13:27:03.016364 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:27:03 crc kubenswrapper[4955]: I0202 13:27:03.016704 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.016869 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.017830 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.017912 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h"
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.019177 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.019237 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" gracePeriod=600
Feb 02 13:27:33 crc kubenswrapper[4955]: E0202 13:27:33.158552 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.587172 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" exitCode=0
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.587227 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"}
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.587265 4955 scope.go:117] "RemoveContainer" containerID="de8f7b51852eedd7c330a4f405023f03d69b18c14dc6e890327bc3a4eab66f6a"
Feb 02 13:27:33 crc kubenswrapper[4955]: I0202 13:27:33.588461 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"
Feb 02 13:27:33 crc kubenswrapper[4955]: E0202 13:27:33.589037 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:27:37 crc kubenswrapper[4955]: I0202 13:27:37.559426 4955 scope.go:117] "RemoveContainer" containerID="84b964b59b521654a67042365d8fb861f4908ffe5a2df45c9b53b735ba4a8ab6"
Feb 02 13:27:37 crc kubenswrapper[4955]: I0202 13:27:37.598034 4955 scope.go:117] "RemoveContainer" containerID="2dd7a1e5f695263f63a5b85a8c5eafa2426c2d5c75bd7c1bf694bb3a925c9672"
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.041497 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9mhp4"]
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.052781 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3703-account-create-update-7pxx8"]
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.062856 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vrgtc"]
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.072717 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9mhp4"]
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.082496 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3703-account-create-update-7pxx8"]
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.091412 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vrgtc"]
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.716442 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"
Feb 02 13:27:45 crc kubenswrapper[4955]: E0202 13:27:45.717190 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.729719 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416b4256-31d6-4455-a31e-62c0a1f3d5fe" path="/var/lib/kubelet/pods/416b4256-31d6-4455-a31e-62c0a1f3d5fe/volumes"
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.730729 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b79866-94e2-413c-b444-4af683c5095e" path="/var/lib/kubelet/pods/b3b79866-94e2-413c-b444-4af683c5095e/volumes"
Feb 02 13:27:45 crc kubenswrapper[4955]: I0202 13:27:45.731490 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4222fa7-e71f-4d91-9e0d-ef369046f6a0" path="/var/lib/kubelet/pods/d4222fa7-e71f-4d91-9e0d-ef369046f6a0/volumes"
Feb 02 13:27:46 crc kubenswrapper[4955]: I0202 13:27:46.035996 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ef11-account-create-update-dvm84"]
Feb 02 13:27:46 crc kubenswrapper[4955]: I0202 13:27:46.049781 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4e0e-account-create-update-qks52"]
Feb 02 13:27:46 crc kubenswrapper[4955]: I0202 13:27:46.061422 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bx69p"]
Feb 02 13:27:46 crc kubenswrapper[4955]: I0202 13:27:46.071247 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bx69p"]
Feb 02 13:27:46 crc kubenswrapper[4955]: I0202 13:27:46.079299 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ef11-account-create-update-dvm84"]
Feb 02 13:27:46 crc kubenswrapper[4955]: I0202 13:27:46.087632 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4e0e-account-create-update-qks52"]
Feb 02 13:27:47 crc kubenswrapper[4955]: I0202 13:27:47.730945 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7279b9-2901-426c-a100-7390d81ae95b" path="/var/lib/kubelet/pods/6e7279b9-2901-426c-a100-7390d81ae95b/volumes"
Feb 02 13:27:47 crc kubenswrapper[4955]: I0202 13:27:47.732262 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52" path="/var/lib/kubelet/pods/abf14f31-bd26-4fb3-8cdb-b3a7cdfd0c52/volumes"
Feb 02 13:27:47 crc kubenswrapper[4955]: I0202 13:27:47.733065 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f73bc6-3700-4cbe-9e5a-7a95c596c039" path="/var/lib/kubelet/pods/c4f73bc6-3700-4cbe-9e5a-7a95c596c039/volumes"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.023737 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tmd"]
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.027150 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.039510 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tmd"]
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.094133 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-utilities\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.094311 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4r5\" (UniqueName: \"kubernetes.io/projected/0021ff13-a158-439f-a8e1-e181d91c3cd7-kube-api-access-lr4r5\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.094591 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-catalog-content\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.196222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-catalog-content\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.196374 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-utilities\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.196419 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4r5\" (UniqueName: \"kubernetes.io/projected/0021ff13-a158-439f-a8e1-e181d91c3cd7-kube-api-access-lr4r5\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.196833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-utilities\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.197075 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-catalog-content\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.216475 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4r5\" (UniqueName: \"kubernetes.io/projected/0021ff13-a158-439f-a8e1-e181d91c3cd7-kube-api-access-lr4r5\") pod \"redhat-marketplace-q8tmd\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:55 crc kubenswrapper[4955]: I0202 13:27:55.400571 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8tmd"
Feb 02 13:27:56 crc kubenswrapper[4955]: I0202 13:27:56.018416 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tmd"]
Feb 02 13:27:56 crc kubenswrapper[4955]: W0202 13:27:56.022727 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0021ff13_a158_439f_a8e1_e181d91c3cd7.slice/crio-6be9ae9ed18b55527cfba53991395e3786d5c9c4d0010ae0ba5143102f022e8a WatchSource:0}: Error finding container 6be9ae9ed18b55527cfba53991395e3786d5c9c4d0010ae0ba5143102f022e8a: Status 404 returned error can't find the container with id 6be9ae9ed18b55527cfba53991395e3786d5c9c4d0010ae0ba5143102f022e8a
Feb 02 13:27:56 crc kubenswrapper[4955]: I0202 13:27:56.811483 4955 generic.go:334] "Generic (PLEG): container finished" podID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerID="9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284" exitCode=0
Feb 02 13:27:56 crc kubenswrapper[4955]: I0202 13:27:56.811537 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerDied","Data":"9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284"}
Feb 02 13:27:56 crc kubenswrapper[4955]: I0202 13:27:56.811836 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerStarted","Data":"6be9ae9ed18b55527cfba53991395e3786d5c9c4d0010ae0ba5143102f022e8a"}
Feb 02 13:27:57 crc kubenswrapper[4955]: I0202 13:27:57.716185 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"
Feb 02 13:27:57 crc kubenswrapper[4955]: E0202 13:27:57.716734 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:27:57 crc kubenswrapper[4955]: I0202 13:27:57.830491 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerStarted","Data":"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9"}
Feb 02 13:27:58 crc kubenswrapper[4955]: I0202 13:27:58.839346 4955 generic.go:334] "Generic (PLEG): container finished" podID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerID="937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9" exitCode=0
Feb 02 13:27:58 crc kubenswrapper[4955]: I0202 13:27:58.839396 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerDied","Data":"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9"}
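The redhat-marketplace-q8tmd entries trace the usual catalog-pod startup: two containers run to completion in sequence (9720aa84… then 937202339…, each finishing with exitCode=0) before the long-running registry-server starts. Assuming these are the standard extract-utilities and extract-content init containers of an OLM catalog source (the container names appear later in this log's RemoveStaleState entries), the ordering is just the init-container contract, sketched here with illustrative names:

package catalog

import "fmt"

// Sketch of the init-container contract visible above: each init container
// must exit 0 before the next one runs; only after all succeed does the
// long-running main container start. Names and signatures are
// illustrative, not the kubelet's.
func runPod(init []func() (exitCode int), main func()) error {
	for i, c := range init {
		if code := c(); code != 0 {
			return fmt.Errorf("init container %d exited %d", i, code)
		}
	}
	main() // e.g. registry-server, later killed with gracePeriod=2
	return nil
}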
event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerDied","Data":"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9"} Feb 02 13:27:59 crc kubenswrapper[4955]: I0202 13:27:59.852938 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerStarted","Data":"036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1"} Feb 02 13:27:59 crc kubenswrapper[4955]: I0202 13:27:59.870955 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8tmd" podStartSLOduration=2.359003354 podStartE2EDuration="4.870934534s" podCreationTimestamp="2026-02-02 13:27:55 +0000 UTC" firstStartedPulling="2026-02-02 13:27:56.815984148 +0000 UTC m=+1527.728320598" lastFinishedPulling="2026-02-02 13:27:59.327915328 +0000 UTC m=+1530.240251778" observedRunningTime="2026-02-02 13:27:59.867393666 +0000 UTC m=+1530.779730126" watchObservedRunningTime="2026-02-02 13:27:59.870934534 +0000 UTC m=+1530.783270994" Feb 02 13:28:05 crc kubenswrapper[4955]: I0202 13:28:05.401099 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8tmd" Feb 02 13:28:05 crc kubenswrapper[4955]: I0202 13:28:05.401700 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8tmd" Feb 02 13:28:05 crc kubenswrapper[4955]: I0202 13:28:05.457911 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8tmd" Feb 02 13:28:05 crc kubenswrapper[4955]: I0202 13:28:05.942280 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8tmd" Feb 02 13:28:05 crc kubenswrapper[4955]: I0202 13:28:05.992801 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tmd"] Feb 02 13:28:07 crc kubenswrapper[4955]: I0202 13:28:07.913216 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8tmd" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="registry-server" containerID="cri-o://036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1" gracePeriod=2 Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.053391 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-82h78"] Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.063985 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-82h78"] Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.401683 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8tmd" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.541192 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr4r5\" (UniqueName: \"kubernetes.io/projected/0021ff13-a158-439f-a8e1-e181d91c3cd7-kube-api-access-lr4r5\") pod \"0021ff13-a158-439f-a8e1-e181d91c3cd7\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.541337 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-catalog-content\") pod \"0021ff13-a158-439f-a8e1-e181d91c3cd7\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.541409 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-utilities\") pod \"0021ff13-a158-439f-a8e1-e181d91c3cd7\" (UID: \"0021ff13-a158-439f-a8e1-e181d91c3cd7\") " Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.542478 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-utilities" (OuterVolumeSpecName: "utilities") pod "0021ff13-a158-439f-a8e1-e181d91c3cd7" (UID: "0021ff13-a158-439f-a8e1-e181d91c3cd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.571426 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0021ff13-a158-439f-a8e1-e181d91c3cd7" (UID: "0021ff13-a158-439f-a8e1-e181d91c3cd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.587047 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0021ff13-a158-439f-a8e1-e181d91c3cd7-kube-api-access-lr4r5" (OuterVolumeSpecName: "kube-api-access-lr4r5") pod "0021ff13-a158-439f-a8e1-e181d91c3cd7" (UID: "0021ff13-a158-439f-a8e1-e181d91c3cd7"). InnerVolumeSpecName "kube-api-access-lr4r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.643985 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.644312 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0021ff13-a158-439f-a8e1-e181d91c3cd7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.644325 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr4r5\" (UniqueName: \"kubernetes.io/projected/0021ff13-a158-439f-a8e1-e181d91c3cd7-kube-api-access-lr4r5\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.924975 4955 generic.go:334] "Generic (PLEG): container finished" podID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerID="036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1" exitCode=0 Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.925027 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8tmd" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.925024 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerDied","Data":"036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1"} Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.925089 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tmd" event={"ID":"0021ff13-a158-439f-a8e1-e181d91c3cd7","Type":"ContainerDied","Data":"6be9ae9ed18b55527cfba53991395e3786d5c9c4d0010ae0ba5143102f022e8a"} Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.925112 4955 scope.go:117] "RemoveContainer" containerID="036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.954427 4955 scope.go:117] "RemoveContainer" containerID="937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9" Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.961643 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tmd"] Feb 02 13:28:08 crc kubenswrapper[4955]: I0202 13:28:08.972974 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tmd"] Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.001798 4955 scope.go:117] "RemoveContainer" containerID="9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.038234 4955 scope.go:117] "RemoveContainer" containerID="036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1" Feb 02 13:28:09 crc kubenswrapper[4955]: E0202 13:28:09.038861 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1\": container with ID starting with 036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1 not found: ID does not exist" containerID="036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.038903 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1"} err="failed to get container status \"036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1\": rpc error: code = NotFound desc = could not find container \"036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1\": container with ID starting with 036bf2aa9f216c5de766b1f677bbf21167fd7d1dc67560aeb862c508262233e1 not found: ID does not exist" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.038931 4955 scope.go:117] "RemoveContainer" containerID="937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9" Feb 02 13:28:09 crc kubenswrapper[4955]: E0202 13:28:09.039271 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9\": container with ID starting with 937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9 not found: ID does not exist" containerID="937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.039381 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9"} err="failed to get container status \"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9\": rpc error: code = NotFound desc = could not find container \"937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9\": container with ID starting with 937202339fb74dae3b91115068e1d789d80c1e373e19bce37a995fb52c8320e9 not found: ID does not exist" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.039490 4955 scope.go:117] "RemoveContainer" containerID="9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284" Feb 02 13:28:09 crc kubenswrapper[4955]: E0202 13:28:09.039942 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284\": container with ID starting with 9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284 not found: ID does not exist" containerID="9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.039973 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284"} err="failed to get container status \"9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284\": rpc error: code = NotFound desc = could not find container \"9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284\": container with ID starting with 9720aa84ca6f2a3b5bcebb2feee0f2eb88d36a8214644d4943b8693800cf9284 not found: ID does not exist" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.728911 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" path="/var/lib/kubelet/pods/0021ff13-a158-439f-a8e1-e181d91c3cd7/volumes" Feb 02 13:28:09 crc kubenswrapper[4955]: I0202 13:28:09.729729 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aba3431-7e59-4d8e-9205-071948d70a8a" path="/var/lib/kubelet/pods/7aba3431-7e59-4d8e-9205-071948d70a8a/volumes" Feb 02 13:28:11 crc kubenswrapper[4955]: I0202 
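The RemoveContainer / "DeleteContainer returned error" pairs above are the benign half of cleanup: the container is already gone, so the runtime's status call returns gRPC NotFound and the kubelet treats the delete as already done instead of failing the sync. A sketch of that idempotent delete, using a sentinel error in place of the runtime's NotFound status (names hypothetical):

package cleanup

import "errors"

// errNotFound stands in for the gRPC NotFound status seen in the
// "ContainerStatus from runtime service failed" entries above.
var errNotFound = errors.New("container not found")

// removeContainer sketches the idempotent delete: a container that is
// already gone counts as success, so repeated cleanup passes converge
// instead of erroring out.
func removeContainer(id string, statusOf func(string) error, remove func(string) error) error {
	if err := statusOf(id); errors.Is(err, errNotFound) {
		return nil // already removed; logged but non-fatal, as above
	} else if err != nil {
		return err
	}
	return remove(id)
}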
Feb 02 13:28:11 crc kubenswrapper[4955]: E0202 13:28:11.716489 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.046545 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-gfs47"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.061338 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7474-account-create-update-4lqrk"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.069623 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-g8c5d"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.081521 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-gfs47"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.092678 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vhvl9"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.100432 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7474-account-create-update-4lqrk"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.108122 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-g8c5d"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.117185 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nzgmz"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.124946 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-600f-account-create-update-777t8"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.132400 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vhvl9"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.139467 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4dd6-account-create-update-bv6lj"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.145992 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-21d7-account-create-update-l26fg"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.155186 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nzgmz"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.163211 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4dd6-account-create-update-bv6lj"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.171760 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-21d7-account-create-update-l26fg"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.179217 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-600f-account-create-update-777t8"]
Feb 02 13:28:24 crc kubenswrapper[4955]: I0202 13:28:24.716098 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"
Feb 02 13:28:24 crc kubenswrapper[4955]: E0202 13:28:24.716330 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.056316 4955 generic.go:334] "Generic (PLEG): container finished" podID="fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" containerID="8f739a51d02ea71895ec64c82c6b746480a5e878dec112a88e6f9497b6980030" exitCode=0
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.056357 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" event={"ID":"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29","Type":"ContainerDied","Data":"8f739a51d02ea71895ec64c82c6b746480a5e878dec112a88e6f9497b6980030"}
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.729057 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22defd53-64bc-47b9-86e8-21563ec3a37f" path="/var/lib/kubelet/pods/22defd53-64bc-47b9-86e8-21563ec3a37f/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.730116 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d03337-7374-44ab-8c95-d092c42d2355" path="/var/lib/kubelet/pods/27d03337-7374-44ab-8c95-d092c42d2355/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.730709 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3544eea9-736e-471c-85b4-b59aab2d5533" path="/var/lib/kubelet/pods/3544eea9-736e-471c-85b4-b59aab2d5533/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.731313 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765f7521-d8d5-4034-b94c-e64a698c65ae" path="/var/lib/kubelet/pods/765f7521-d8d5-4034-b94c-e64a698c65ae/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.731872 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a847cbe7-3090-4d64-9faf-ed4414d614ad" path="/var/lib/kubelet/pods/a847cbe7-3090-4d64-9faf-ed4414d614ad/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.732474 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda69844-00d2-4981-bfbd-1d4ed05274d1" path="/var/lib/kubelet/pods/cda69844-00d2-4981-bfbd-1d4ed05274d1/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.733178 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea27d86-5db9-49fd-b9bf-44176e78d3d6" path="/var/lib/kubelet/pods/dea27d86-5db9-49fd-b9bf-44176e78d3d6/volumes"
Feb 02 13:28:25 crc kubenswrapper[4955]: I0202 13:28:25.733817 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe99b176-e998-4cdd-9cef-32407153cc79" path="/var/lib/kubelet/pods/fe99b176-e998-4cdd-9cef-32407153cc79/volumes"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.479339 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzm8g"]
Feb 02 13:28:26 crc kubenswrapper[4955]: E0202 13:28:26.480084 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="extract-content"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.480101 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="extract-content"
Feb 02 13:28:26 crc kubenswrapper[4955]: E0202 13:28:26.480113 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="registry-server"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.480120 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="registry-server"
Feb 02 13:28:26 crc kubenswrapper[4955]: E0202 13:28:26.480132 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="extract-utilities"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.480139 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="extract-utilities"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.480328 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0021ff13-a158-439f-a8e1-e181d91c3cd7" containerName="registry-server"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.483204 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.490894 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzm8g"]
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.515672 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.588044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z6pq\" (UniqueName: \"kubernetes.io/projected/27884f6c-7aa5-45c0-8a12-59ea99473a3c-kube-api-access-5z6pq\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.588396 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-catalog-content\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.588534 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-utilities\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.690300 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s567x\" (UniqueName: \"kubernetes.io/projected/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-kube-api-access-s567x\") pod \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") "
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.690526 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-ssh-key-openstack-edpm-ipam\") pod \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") "
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.690652 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-inventory\") pod \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\" (UID: \"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29\") "
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.691032 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-utilities\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.691200 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z6pq\" (UniqueName: \"kubernetes.io/projected/27884f6c-7aa5-45c0-8a12-59ea99473a3c-kube-api-access-5z6pq\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.691289 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-catalog-content\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.691805 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-utilities\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.691830 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-catalog-content\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g"
Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.697087 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-kube-api-access-s567x" (OuterVolumeSpecName: "kube-api-access-s567x") pod "fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" (UID: "fa5c1aeb-8726-4269-89d0-fe07ca5c6c29"). InnerVolumeSpecName "kube-api-access-s567x". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.708503 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z6pq\" (UniqueName: \"kubernetes.io/projected/27884f6c-7aa5-45c0-8a12-59ea99473a3c-kube-api-access-5z6pq\") pod \"certified-operators-kzm8g\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.723733 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" (UID: "fa5c1aeb-8726-4269-89d0-fe07ca5c6c29"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.735111 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-inventory" (OuterVolumeSpecName: "inventory") pod "fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" (UID: "fa5c1aeb-8726-4269-89d0-fe07ca5c6c29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.793151 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s567x\" (UniqueName: \"kubernetes.io/projected/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-kube-api-access-s567x\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.793184 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.793195 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5c1aeb-8726-4269-89d0-fe07ca5c6c29-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:26 crc kubenswrapper[4955]: I0202 13:28:26.831142 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.085850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" event={"ID":"fa5c1aeb-8726-4269-89d0-fe07ca5c6c29","Type":"ContainerDied","Data":"8a5da5a92a741e4f0f55e49a8504793f0a16ef28c2780ea918ac0a8981341dcb"} Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.086406 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a5da5a92a741e4f0f55e49a8504793f0a16ef28c2780ea918ac0a8981341dcb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.086477 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.193268 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb"] Feb 02 13:28:27 crc kubenswrapper[4955]: E0202 13:28:27.193849 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.193875 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.194112 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5c1aeb-8726-4269-89d0-fe07ca5c6c29" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.194920 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.201302 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.201450 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.201583 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.201797 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.224788 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb"] Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.305136 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.305336 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.305399 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6tk\" (UniqueName: \"kubernetes.io/projected/e6c973ae-9b03-4511-abfc-360377684859-kube-api-access-mw6tk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.362121 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzm8g"] Feb 02 13:28:27 crc kubenswrapper[4955]: W0202 13:28:27.365288 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27884f6c_7aa5_45c0_8a12_59ea99473a3c.slice/crio-072bc51598550db95819c96aa64419f88393d8367be03e4bcedf041df18f8c33 WatchSource:0}: Error finding container 072bc51598550db95819c96aa64419f88393d8367be03e4bcedf041df18f8c33: Status 404 returned error can't find the container with id 072bc51598550db95819c96aa64419f88393d8367be03e4bcedf041df18f8c33 Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.406968 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.407118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.407190 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6tk\" (UniqueName: \"kubernetes.io/projected/e6c973ae-9b03-4511-abfc-360377684859-kube-api-access-mw6tk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.420418 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.420444 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc kubenswrapper[4955]: I0202 13:28:27.423594 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6tk\" (UniqueName: \"kubernetes.io/projected/e6c973ae-9b03-4511-abfc-360377684859-kube-api-access-mw6tk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:27 crc 
kubenswrapper[4955]: I0202 13:28:27.529040 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:28:28 crc kubenswrapper[4955]: I0202 13:28:28.041896 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb"] Feb 02 13:28:28 crc kubenswrapper[4955]: I0202 13:28:28.095466 4955 generic.go:334] "Generic (PLEG): container finished" podID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerID="9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d" exitCode=0 Feb 02 13:28:28 crc kubenswrapper[4955]: I0202 13:28:28.095507 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerDied","Data":"9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d"} Feb 02 13:28:28 crc kubenswrapper[4955]: I0202 13:28:28.095570 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerStarted","Data":"072bc51598550db95819c96aa64419f88393d8367be03e4bcedf041df18f8c33"} Feb 02 13:28:28 crc kubenswrapper[4955]: I0202 13:28:28.097333 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" event={"ID":"e6c973ae-9b03-4511-abfc-360377684859","Type":"ContainerStarted","Data":"e78a9213054005d3e04ee815727758ec6c2c1d7940914433125d6a3a088a6a24"} Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.036423 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4r26c"] Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.046290 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6xnc6"] Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.055925 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6xnc6"] Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.066839 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4r26c"] Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.108666 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerStarted","Data":"faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd"} Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.110494 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" event={"ID":"e6c973ae-9b03-4511-abfc-360377684859","Type":"ContainerStarted","Data":"9bac42a8f04b07ebec67af6254e63ac7bf110fb5833a5ec359a4d8ef445df9d8"} Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.144992 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" podStartSLOduration=1.7376910140000001 podStartE2EDuration="2.14496666s" podCreationTimestamp="2026-02-02 13:28:27 +0000 UTC" firstStartedPulling="2026-02-02 13:28:28.048063598 +0000 UTC m=+1558.960400058" lastFinishedPulling="2026-02-02 13:28:28.455339254 +0000 UTC m=+1559.367675704" observedRunningTime="2026-02-02 13:28:29.134359314 +0000 UTC m=+1560.046695764" watchObservedRunningTime="2026-02-02 
13:28:29.14496666 +0000 UTC m=+1560.057303110" Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.768936 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93af2657-6c4c-4163-aeb1-4527c3a6bf1a" path="/var/lib/kubelet/pods/93af2657-6c4c-4163-aeb1-4527c3a6bf1a/volumes" Feb 02 13:28:29 crc kubenswrapper[4955]: I0202 13:28:29.775024 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5111fc8-b31a-4644-aae9-5a89e4d5da9a" path="/var/lib/kubelet/pods/d5111fc8-b31a-4644-aae9-5a89e4d5da9a/volumes" Feb 02 13:28:31 crc kubenswrapper[4955]: I0202 13:28:31.131242 4955 generic.go:334] "Generic (PLEG): container finished" podID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerID="faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd" exitCode=0 Feb 02 13:28:31 crc kubenswrapper[4955]: I0202 13:28:31.131335 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerDied","Data":"faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd"} Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.142282 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerStarted","Data":"f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff"} Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.161172 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kzm8g" podStartSLOduration=2.718619185 podStartE2EDuration="6.161145405s" podCreationTimestamp="2026-02-02 13:28:26 +0000 UTC" firstStartedPulling="2026-02-02 13:28:28.096862829 +0000 UTC m=+1559.009199279" lastFinishedPulling="2026-02-02 13:28:31.539389049 +0000 UTC m=+1562.451725499" observedRunningTime="2026-02-02 13:28:32.160804997 +0000 UTC m=+1563.073141447" watchObservedRunningTime="2026-02-02 13:28:32.161145405 +0000 UTC m=+1563.073481865" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.462867 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s99nz"] Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.464679 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.478689 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s99nz"] Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.546765 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4txhs\" (UniqueName: \"kubernetes.io/projected/39881146-bfb3-415c-99fe-589904b481b1-kube-api-access-4txhs\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.546860 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-catalog-content\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.546965 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-utilities\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.648962 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4txhs\" (UniqueName: \"kubernetes.io/projected/39881146-bfb3-415c-99fe-589904b481b1-kube-api-access-4txhs\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.649040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-catalog-content\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.649098 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-utilities\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.649500 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-utilities\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.649595 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-catalog-content\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.671374 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4txhs\" (UniqueName: \"kubernetes.io/projected/39881146-bfb3-415c-99fe-589904b481b1-kube-api-access-4txhs\") pod \"redhat-operators-s99nz\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:32 crc kubenswrapper[4955]: I0202 13:28:32.780984 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:33 crc kubenswrapper[4955]: I0202 13:28:33.221827 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s99nz"] Feb 02 13:28:33 crc kubenswrapper[4955]: W0202 13:28:33.226699 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39881146_bfb3_415c_99fe_589904b481b1.slice/crio-e1e15d212571128a6fb072c249d2e1d04f1e0ebe33e08c5c72f283a1188a603c WatchSource:0}: Error finding container e1e15d212571128a6fb072c249d2e1d04f1e0ebe33e08c5c72f283a1188a603c: Status 404 returned error can't find the container with id e1e15d212571128a6fb072c249d2e1d04f1e0ebe33e08c5c72f283a1188a603c Feb 02 13:28:34 crc kubenswrapper[4955]: I0202 13:28:34.160035 4955 generic.go:334] "Generic (PLEG): container finished" podID="39881146-bfb3-415c-99fe-589904b481b1" containerID="d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73" exitCode=0 Feb 02 13:28:34 crc kubenswrapper[4955]: I0202 13:28:34.160142 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerDied","Data":"d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73"} Feb 02 13:28:34 crc kubenswrapper[4955]: I0202 13:28:34.160409 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerStarted","Data":"e1e15d212571128a6fb072c249d2e1d04f1e0ebe33e08c5c72f283a1188a603c"} Feb 02 13:28:35 crc kubenswrapper[4955]: I0202 13:28:35.170465 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerStarted","Data":"ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807"} Feb 02 13:28:36 crc kubenswrapper[4955]: I0202 13:28:36.831240 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:36 crc kubenswrapper[4955]: I0202 13:28:36.832700 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:36 crc kubenswrapper[4955]: I0202 13:28:36.875506 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.192146 4955 generic.go:334] "Generic (PLEG): container finished" podID="39881146-bfb3-415c-99fe-589904b481b1" containerID="ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807" exitCode=0 Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.192233 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerDied","Data":"ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807"} Feb 02 13:28:37 crc 
kubenswrapper[4955]: I0202 13:28:37.247590 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.663471 4955 scope.go:117] "RemoveContainer" containerID="dcc095c9764baf9dc5b5dad2c9e7a3870622884b6b47c5e6b32fc574f79aecb6" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.694989 4955 scope.go:117] "RemoveContainer" containerID="d8ee3535d77f70e99eb54c38efbe3b431a8e68bab5d349c9c55a0df0bda3768d" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.716758 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:28:37 crc kubenswrapper[4955]: E0202 13:28:37.717095 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.751004 4955 scope.go:117] "RemoveContainer" containerID="1feb7d9e24726f65a8dc626bd665e9c2fca46e0ac33505c58d0d65765b4593b0" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.923208 4955 scope.go:117] "RemoveContainer" containerID="2de70f227665485fb9e717a948f6cb1fdf90ab88f09d2a9976259074ff3e57d9" Feb 02 13:28:37 crc kubenswrapper[4955]: I0202 13:28:37.980152 4955 scope.go:117] "RemoveContainer" containerID="91d8d9e707711fcae426e8b170cb80981d57c808662fb514ea1b081fb8151a98" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.011433 4955 scope.go:117] "RemoveContainer" containerID="914d456e1d094fad591bfeace5376b27ca7d57d2845570178729d1674a9b069c" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.064298 4955 scope.go:117] "RemoveContainer" containerID="40b43eb3ce14e9a9b154c05904ab9fcc6e1e8c89dc11591de887e31181cfff9f" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.093215 4955 scope.go:117] "RemoveContainer" containerID="12fccdd4d92d04e566dbf725ccb7473ff51377fc23b10abc40c5d5ac6a18e536" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.122550 4955 scope.go:117] "RemoveContainer" containerID="9431690c224249cc93aefe1fba0be8d28d2321d6020f9b450e3daf79f7f98943" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.166234 4955 scope.go:117] "RemoveContainer" containerID="bb0ef8ef062df0f5a85db38636519498c5989f2758195b2e688e00232107c930" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.196628 4955 scope.go:117] "RemoveContainer" containerID="eb90ddc4fc2e6e90c985a5e44d4818943bce8a1c90af823c2bac44597a51efca" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.239408 4955 scope.go:117] "RemoveContainer" containerID="665023b766e465377a8aa0f03e53b775038a8a3a7c04bbd47a953b3aa0ce77e6" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.265518 4955 scope.go:117] "RemoveContainer" containerID="765149ad9f9697994a8d85fd36e047189b0ce314d690370fc10e6f9971633193" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.294964 4955 scope.go:117] "RemoveContainer" containerID="ece2a490f5a6a52a96458ef7bad0fc2e3a57119e26b83a997ac70eadb4b94287" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.322180 4955 scope.go:117] "RemoveContainer" containerID="2c433a9e6e10c3f8a655d1ddefa7490235c12dfecec5721d15bcabcd10f4af87" Feb 02 13:28:38 crc 
kubenswrapper[4955]: I0202 13:28:38.346679 4955 scope.go:117] "RemoveContainer" containerID="21c3d26a19b7104e2eb1e527113b02315c07975d721a6723a7abc9daca8770ca" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.367181 4955 scope.go:117] "RemoveContainer" containerID="8a5fa4fdc43a62d9943ecb4f2d9a3e10b41f8492daa67c8ae6c7952a0ef3d9e7" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.388879 4955 scope.go:117] "RemoveContainer" containerID="46cf4d138b8cf856a6ce96e55d09ee446998137334c082356c24caac5d427089" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.406608 4955 scope.go:117] "RemoveContainer" containerID="c8be1efccc92b4fbd85ead057b56cb853a94cc68e72028a05c3e87430a9f375c" Feb 02 13:28:38 crc kubenswrapper[4955]: I0202 13:28:38.854339 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzm8g"] Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.243485 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerStarted","Data":"8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c"} Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.243683 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kzm8g" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="registry-server" containerID="cri-o://f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff" gracePeriod=2 Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.270705 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s99nz" podStartSLOduration=3.265712204 podStartE2EDuration="7.270687705s" podCreationTimestamp="2026-02-02 13:28:32 +0000 UTC" firstStartedPulling="2026-02-02 13:28:34.161400006 +0000 UTC m=+1565.073736456" lastFinishedPulling="2026-02-02 13:28:38.166375507 +0000 UTC m=+1569.078711957" observedRunningTime="2026-02-02 13:28:39.262594402 +0000 UTC m=+1570.174930862" watchObservedRunningTime="2026-02-02 13:28:39.270687705 +0000 UTC m=+1570.183024155" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.709316 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.787343 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-utilities\") pod \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.787903 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-catalog-content\") pod \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.788106 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z6pq\" (UniqueName: \"kubernetes.io/projected/27884f6c-7aa5-45c0-8a12-59ea99473a3c-kube-api-access-5z6pq\") pod \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\" (UID: \"27884f6c-7aa5-45c0-8a12-59ea99473a3c\") " Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.788272 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-utilities" (OuterVolumeSpecName: "utilities") pod "27884f6c-7aa5-45c0-8a12-59ea99473a3c" (UID: "27884f6c-7aa5-45c0-8a12-59ea99473a3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.789237 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.797813 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27884f6c-7aa5-45c0-8a12-59ea99473a3c-kube-api-access-5z6pq" (OuterVolumeSpecName: "kube-api-access-5z6pq") pod "27884f6c-7aa5-45c0-8a12-59ea99473a3c" (UID: "27884f6c-7aa5-45c0-8a12-59ea99473a3c"). InnerVolumeSpecName "kube-api-access-5z6pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.860133 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27884f6c-7aa5-45c0-8a12-59ea99473a3c" (UID: "27884f6c-7aa5-45c0-8a12-59ea99473a3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.891783 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27884f6c-7aa5-45c0-8a12-59ea99473a3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:39 crc kubenswrapper[4955]: I0202 13:28:39.891830 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z6pq\" (UniqueName: \"kubernetes.io/projected/27884f6c-7aa5-45c0-8a12-59ea99473a3c-kube-api-access-5z6pq\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.254357 4955 generic.go:334] "Generic (PLEG): container finished" podID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerID="f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff" exitCode=0 Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.254399 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerDied","Data":"f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff"} Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.254422 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzm8g" event={"ID":"27884f6c-7aa5-45c0-8a12-59ea99473a3c","Type":"ContainerDied","Data":"072bc51598550db95819c96aa64419f88393d8367be03e4bcedf041df18f8c33"} Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.254440 4955 scope.go:117] "RemoveContainer" containerID="f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.254610 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzm8g" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.283018 4955 scope.go:117] "RemoveContainer" containerID="faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.287482 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzm8g"] Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.300626 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kzm8g"] Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.312151 4955 scope.go:117] "RemoveContainer" containerID="9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.351728 4955 scope.go:117] "RemoveContainer" containerID="f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff" Feb 02 13:28:40 crc kubenswrapper[4955]: E0202 13:28:40.352232 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff\": container with ID starting with f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff not found: ID does not exist" containerID="f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.352270 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff"} err="failed to get container status \"f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff\": rpc error: code = NotFound desc = could not find container \"f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff\": container with ID starting with f941e741d3521650b79082ca5ea5bc6d080fd06656709280b40d7703a3a22cff not found: ID does not exist" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.352299 4955 scope.go:117] "RemoveContainer" containerID="faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd" Feb 02 13:28:40 crc kubenswrapper[4955]: E0202 13:28:40.352520 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd\": container with ID starting with faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd not found: ID does not exist" containerID="faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.352547 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd"} err="failed to get container status \"faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd\": rpc error: code = NotFound desc = could not find container \"faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd\": container with ID starting with faa2eb358210a03d210e1cc2f480d4353c2f08d8714ff7b51690fec9822b93dd not found: ID does not exist" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.352623 4955 scope.go:117] "RemoveContainer" containerID="9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d" Feb 02 13:28:40 crc kubenswrapper[4955]: E0202 13:28:40.352861 4955 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d\": container with ID starting with 9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d not found: ID does not exist" containerID="9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d" Feb 02 13:28:40 crc kubenswrapper[4955]: I0202 13:28:40.352889 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d"} err="failed to get container status \"9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d\": rpc error: code = NotFound desc = could not find container \"9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d\": container with ID starting with 9742ad69dc9e2639f121fb4f53e22d3b745c833202f1bc32866444a9c4d57e1d not found: ID does not exist" Feb 02 13:28:41 crc kubenswrapper[4955]: I0202 13:28:41.727646 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" path="/var/lib/kubelet/pods/27884f6c-7aa5-45c0-8a12-59ea99473a3c/volumes" Feb 02 13:28:42 crc kubenswrapper[4955]: I0202 13:28:42.781393 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:42 crc kubenswrapper[4955]: I0202 13:28:42.782290 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:43 crc kubenswrapper[4955]: I0202 13:28:43.833880 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s99nz" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="registry-server" probeResult="failure" output=< Feb 02 13:28:43 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 13:28:43 crc kubenswrapper[4955]: > Feb 02 13:28:50 crc kubenswrapper[4955]: I0202 13:28:50.715677 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:28:50 crc kubenswrapper[4955]: E0202 13:28:50.716335 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:28:52 crc kubenswrapper[4955]: I0202 13:28:52.834500 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:52 crc kubenswrapper[4955]: I0202 13:28:52.893054 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:53 crc kubenswrapper[4955]: I0202 13:28:53.073044 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s99nz"] Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.371589 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s99nz" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="registry-server" containerID="cri-o://8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c" 
gracePeriod=2 Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.856519 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.912890 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-catalog-content\") pod \"39881146-bfb3-415c-99fe-589904b481b1\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.913466 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-utilities\") pod \"39881146-bfb3-415c-99fe-589904b481b1\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.913583 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4txhs\" (UniqueName: \"kubernetes.io/projected/39881146-bfb3-415c-99fe-589904b481b1-kube-api-access-4txhs\") pod \"39881146-bfb3-415c-99fe-589904b481b1\" (UID: \"39881146-bfb3-415c-99fe-589904b481b1\") " Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.914395 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-utilities" (OuterVolumeSpecName: "utilities") pod "39881146-bfb3-415c-99fe-589904b481b1" (UID: "39881146-bfb3-415c-99fe-589904b481b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:28:54 crc kubenswrapper[4955]: I0202 13:28:54.919272 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39881146-bfb3-415c-99fe-589904b481b1-kube-api-access-4txhs" (OuterVolumeSpecName: "kube-api-access-4txhs") pod "39881146-bfb3-415c-99fe-589904b481b1" (UID: "39881146-bfb3-415c-99fe-589904b481b1"). InnerVolumeSpecName "kube-api-access-4txhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.015818 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4txhs\" (UniqueName: \"kubernetes.io/projected/39881146-bfb3-415c-99fe-589904b481b1-kube-api-access-4txhs\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.015846 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.071353 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39881146-bfb3-415c-99fe-589904b481b1" (UID: "39881146-bfb3-415c-99fe-589904b481b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.116911 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39881146-bfb3-415c-99fe-589904b481b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.382200 4955 generic.go:334] "Generic (PLEG): container finished" podID="39881146-bfb3-415c-99fe-589904b481b1" containerID="8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c" exitCode=0 Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.382248 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerDied","Data":"8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c"} Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.382290 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s99nz" event={"ID":"39881146-bfb3-415c-99fe-589904b481b1","Type":"ContainerDied","Data":"e1e15d212571128a6fb072c249d2e1d04f1e0ebe33e08c5c72f283a1188a603c"} Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.382291 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s99nz" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.382305 4955 scope.go:117] "RemoveContainer" containerID="8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.418156 4955 scope.go:117] "RemoveContainer" containerID="ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.425350 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s99nz"] Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.437624 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s99nz"] Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.444315 4955 scope.go:117] "RemoveContainer" containerID="d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.500840 4955 scope.go:117] "RemoveContainer" containerID="8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c" Feb 02 13:28:55 crc kubenswrapper[4955]: E0202 13:28:55.502528 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c\": container with ID starting with 8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c not found: ID does not exist" containerID="8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.502599 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c"} err="failed to get container status \"8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c\": rpc error: code = NotFound desc = could not find container \"8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c\": container with ID starting with 8f76ba80ea314cac7551dd345bdc069aa2fc5e5b96a5c92617fcfac0bfab585c not found: ID does not exist" Feb 02 13:28:55 crc 
kubenswrapper[4955]: I0202 13:28:55.502631 4955 scope.go:117] "RemoveContainer" containerID="ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807" Feb 02 13:28:55 crc kubenswrapper[4955]: E0202 13:28:55.503225 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807\": container with ID starting with ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807 not found: ID does not exist" containerID="ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.503287 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807"} err="failed to get container status \"ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807\": rpc error: code = NotFound desc = could not find container \"ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807\": container with ID starting with ce9139836ba1a0564ead17c44e03ff5a4b8ee680e15549c469e691399ceb0807 not found: ID does not exist" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.503317 4955 scope.go:117] "RemoveContainer" containerID="d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73" Feb 02 13:28:55 crc kubenswrapper[4955]: E0202 13:28:55.503717 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73\": container with ID starting with d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73 not found: ID does not exist" containerID="d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.503752 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73"} err="failed to get container status \"d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73\": rpc error: code = NotFound desc = could not find container \"d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73\": container with ID starting with d332f177ce8a270eb9102dd82b1aa3828d11f1cf1df7377aa519a144bf814f73 not found: ID does not exist" Feb 02 13:28:55 crc kubenswrapper[4955]: I0202 13:28:55.735836 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39881146-bfb3-415c-99fe-589904b481b1" path="/var/lib/kubelet/pods/39881146-bfb3-415c-99fe-589904b481b1/volumes" Feb 02 13:29:02 crc kubenswrapper[4955]: I0202 13:29:02.715894 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:29:02 crc kubenswrapper[4955]: E0202 13:29:02.716641 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.082721 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jdrff"] Feb 02 13:29:06 crc 
kubenswrapper[4955]: I0202 13:29:06.094844 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rvmkt"] Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.103743 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jdrff"] Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.111437 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wjnkc"] Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.123830 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rvmkt"] Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.131619 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wjnkc"] Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.139113 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9wmph"] Feb 02 13:29:06 crc kubenswrapper[4955]: I0202 13:29:06.145910 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9wmph"] Feb 02 13:29:07 crc kubenswrapper[4955]: I0202 13:29:07.744532 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01fe12bd-03cd-402a-89a5-db886a443423" path="/var/lib/kubelet/pods/01fe12bd-03cd-402a-89a5-db886a443423/volumes" Feb 02 13:29:07 crc kubenswrapper[4955]: I0202 13:29:07.745215 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b66af4-aa8a-4739-8ed1-d55f066b5505" path="/var/lib/kubelet/pods/72b66af4-aa8a-4739-8ed1-d55f066b5505/volumes" Feb 02 13:29:07 crc kubenswrapper[4955]: I0202 13:29:07.745761 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8727b892-9204-4236-9e54-80af117730db" path="/var/lib/kubelet/pods/8727b892-9204-4236-9e54-80af117730db/volumes" Feb 02 13:29:07 crc kubenswrapper[4955]: I0202 13:29:07.746704 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a474135c-7a61-46ee-af96-680f7139539b" path="/var/lib/kubelet/pods/a474135c-7a61-46ee-af96-680f7139539b/volumes" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.227663 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hcrgx"] Feb 02 13:29:13 crc kubenswrapper[4955]: E0202 13:29:13.228469 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="extract-content" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228483 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="extract-content" Feb 02 13:29:13 crc kubenswrapper[4955]: E0202 13:29:13.228504 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="extract-utilities" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228510 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="extract-utilities" Feb 02 13:29:13 crc kubenswrapper[4955]: E0202 13:29:13.228521 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="extract-content" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228527 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="extract-content" Feb 02 13:29:13 crc kubenswrapper[4955]: E0202 13:29:13.228545 4955 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="registry-server" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228564 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="registry-server" Feb 02 13:29:13 crc kubenswrapper[4955]: E0202 13:29:13.228579 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="registry-server" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228584 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="registry-server" Feb 02 13:29:13 crc kubenswrapper[4955]: E0202 13:29:13.228596 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="extract-utilities" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228601 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="extract-utilities" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228772 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="39881146-bfb3-415c-99fe-589904b481b1" containerName="registry-server" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.228795 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="27884f6c-7aa5-45c0-8a12-59ea99473a3c" containerName="registry-server" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.230073 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.241938 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcrgx"] Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.352220 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-utilities\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.352755 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-catalog-content\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.352892 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrd9\" (UniqueName: \"kubernetes.io/projected/6c89545b-f0fb-4e94-b3a6-5890631ead14-kube-api-access-bjrd9\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.454184 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-catalog-content\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc 
kubenswrapper[4955]: I0202 13:29:13.454226 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrd9\" (UniqueName: \"kubernetes.io/projected/6c89545b-f0fb-4e94-b3a6-5890631ead14-kube-api-access-bjrd9\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.454289 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-utilities\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.454963 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-catalog-content\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.454973 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-utilities\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.487311 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrd9\" (UniqueName: \"kubernetes.io/projected/6c89545b-f0fb-4e94-b3a6-5890631ead14-kube-api-access-bjrd9\") pod \"community-operators-hcrgx\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:13 crc kubenswrapper[4955]: I0202 13:29:13.549161 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:14 crc kubenswrapper[4955]: I0202 13:29:14.064651 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcrgx"] Feb 02 13:29:14 crc kubenswrapper[4955]: I0202 13:29:14.547600 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerID="7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0" exitCode=0 Feb 02 13:29:14 crc kubenswrapper[4955]: I0202 13:29:14.547680 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerDied","Data":"7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0"} Feb 02 13:29:14 crc kubenswrapper[4955]: I0202 13:29:14.548085 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerStarted","Data":"f38e616693ebdd0bed68d0b1bb372731f636243fc53015134c530d9bfe980de4"} Feb 02 13:29:15 crc kubenswrapper[4955]: I0202 13:29:15.560622 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerStarted","Data":"2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b"} Feb 02 13:29:15 crc kubenswrapper[4955]: I0202 13:29:15.716247 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:29:15 crc kubenswrapper[4955]: E0202 13:29:15.716712 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:29:16 crc kubenswrapper[4955]: I0202 13:29:16.572012 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerID="2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b" exitCode=0 Feb 02 13:29:16 crc kubenswrapper[4955]: I0202 13:29:16.572069 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerDied","Data":"2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b"} Feb 02 13:29:17 crc kubenswrapper[4955]: I0202 13:29:17.585749 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerStarted","Data":"3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab"} Feb 02 13:29:17 crc kubenswrapper[4955]: I0202 13:29:17.614628 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hcrgx" podStartSLOduration=2.114450658 podStartE2EDuration="4.614605773s" podCreationTimestamp="2026-02-02 13:29:13 +0000 UTC" firstStartedPulling="2026-02-02 13:29:14.549842422 +0000 UTC m=+1605.462178872" lastFinishedPulling="2026-02-02 13:29:17.049997537 +0000 UTC m=+1607.962333987" observedRunningTime="2026-02-02 
13:29:17.609230939 +0000 UTC m=+1608.521567389" watchObservedRunningTime="2026-02-02 13:29:17.614605773 +0000 UTC m=+1608.526942223" Feb 02 13:29:23 crc kubenswrapper[4955]: I0202 13:29:23.550341 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:23 crc kubenswrapper[4955]: I0202 13:29:23.550936 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:23 crc kubenswrapper[4955]: I0202 13:29:23.603921 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:23 crc kubenswrapper[4955]: I0202 13:29:23.678036 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:23 crc kubenswrapper[4955]: I0202 13:29:23.835798 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcrgx"] Feb 02 13:29:25 crc kubenswrapper[4955]: I0202 13:29:25.650442 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hcrgx" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="registry-server" containerID="cri-o://3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab" gracePeriod=2 Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.049967 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4q9j2"] Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.059878 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wjvr2"] Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.073006 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4q9j2"] Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.080702 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wjvr2"] Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.328390 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.419764 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrd9\" (UniqueName: \"kubernetes.io/projected/6c89545b-f0fb-4e94-b3a6-5890631ead14-kube-api-access-bjrd9\") pod \"6c89545b-f0fb-4e94-b3a6-5890631ead14\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.420120 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-catalog-content\") pod \"6c89545b-f0fb-4e94-b3a6-5890631ead14\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.420239 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-utilities\") pod \"6c89545b-f0fb-4e94-b3a6-5890631ead14\" (UID: \"6c89545b-f0fb-4e94-b3a6-5890631ead14\") " Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.420897 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-utilities" (OuterVolumeSpecName: "utilities") pod "6c89545b-f0fb-4e94-b3a6-5890631ead14" (UID: "6c89545b-f0fb-4e94-b3a6-5890631ead14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.428417 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c89545b-f0fb-4e94-b3a6-5890631ead14-kube-api-access-bjrd9" (OuterVolumeSpecName: "kube-api-access-bjrd9") pod "6c89545b-f0fb-4e94-b3a6-5890631ead14" (UID: "6c89545b-f0fb-4e94-b3a6-5890631ead14"). InnerVolumeSpecName "kube-api-access-bjrd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.477140 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c89545b-f0fb-4e94-b3a6-5890631ead14" (UID: "6c89545b-f0fb-4e94-b3a6-5890631ead14"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.522786 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrd9\" (UniqueName: \"kubernetes.io/projected/6c89545b-f0fb-4e94-b3a6-5890631ead14-kube-api-access-bjrd9\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.523078 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.523145 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c89545b-f0fb-4e94-b3a6-5890631ead14-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.661719 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerID="3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab" exitCode=0 Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.661762 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerDied","Data":"3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab"} Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.661817 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcrgx" event={"ID":"6c89545b-f0fb-4e94-b3a6-5890631ead14","Type":"ContainerDied","Data":"f38e616693ebdd0bed68d0b1bb372731f636243fc53015134c530d9bfe980de4"} Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.661844 4955 scope.go:117] "RemoveContainer" containerID="3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.662747 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hcrgx" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.680322 4955 scope.go:117] "RemoveContainer" containerID="2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.705105 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcrgx"] Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.711898 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hcrgx"] Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.712793 4955 scope.go:117] "RemoveContainer" containerID="7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.716971 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:29:26 crc kubenswrapper[4955]: E0202 13:29:26.717327 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.745199 4955 scope.go:117] "RemoveContainer" containerID="3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab" Feb 02 13:29:26 crc kubenswrapper[4955]: E0202 13:29:26.745692 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab\": container with ID starting with 3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab not found: ID does not exist" containerID="3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.745724 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab"} err="failed to get container status \"3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab\": rpc error: code = NotFound desc = could not find container \"3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab\": container with ID starting with 3f1a6e4c242a4e153d68017800689e6c6ffdff8aabd60bc9704dfe0da1a2ffab not found: ID does not exist" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.745748 4955 scope.go:117] "RemoveContainer" containerID="2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b" Feb 02 13:29:26 crc kubenswrapper[4955]: E0202 13:29:26.746065 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b\": container with ID starting with 2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b not found: ID does not exist" containerID="2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.746083 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b"} err="failed to get container status \"2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b\": rpc error: code = NotFound desc = could not find container \"2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b\": container with ID starting with 2cf547b34f74b45bb9bf74a61a95a7df1be6c13e43c4b84910464c88f7ab609b not found: ID does not exist" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.746094 4955 scope.go:117] "RemoveContainer" containerID="7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0" Feb 02 13:29:26 crc kubenswrapper[4955]: E0202 13:29:26.746298 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0\": container with ID starting with 7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0 not found: ID does not exist" containerID="7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0" Feb 02 13:29:26 crc kubenswrapper[4955]: I0202 13:29:26.746317 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0"} err="failed to get container status \"7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0\": rpc error: code = NotFound desc = could not find container \"7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0\": container with ID starting with 7360b4ad385353020f9d38424e1b7ba3cd2f7a81c6e44ed29da0bb0a09ecd5c0 not found: ID does not exist" Feb 02 13:29:27 crc kubenswrapper[4955]: I0202 13:29:27.727853 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462f37c8-5909-418b-bf1f-58af764957ab" path="/var/lib/kubelet/pods/462f37c8-5909-418b-bf1f-58af764957ab/volumes" Feb 02 13:29:27 crc kubenswrapper[4955]: I0202 13:29:27.729521 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" path="/var/lib/kubelet/pods/6c89545b-f0fb-4e94-b3a6-5890631ead14/volumes" Feb 02 13:29:27 crc kubenswrapper[4955]: I0202 13:29:27.730445 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8aafab-3905-4e44-ba3e-134253a38a60" path="/var/lib/kubelet/pods/fc8aafab-3905-4e44-ba3e-134253a38a60/volumes" Feb 02 13:29:33 crc kubenswrapper[4955]: I0202 13:29:33.745350 4955 generic.go:334] "Generic (PLEG): container finished" podID="e6c973ae-9b03-4511-abfc-360377684859" containerID="9bac42a8f04b07ebec67af6254e63ac7bf110fb5833a5ec359a4d8ef445df9d8" exitCode=0 Feb 02 13:29:33 crc kubenswrapper[4955]: I0202 13:29:33.745893 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" event={"ID":"e6c973ae-9b03-4511-abfc-360377684859","Type":"ContainerDied","Data":"9bac42a8f04b07ebec67af6254e63ac7bf110fb5833a5ec359a4d8ef445df9d8"} Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.118365 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.333819 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw6tk\" (UniqueName: \"kubernetes.io/projected/e6c973ae-9b03-4511-abfc-360377684859-kube-api-access-mw6tk\") pod \"e6c973ae-9b03-4511-abfc-360377684859\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.334040 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-ssh-key-openstack-edpm-ipam\") pod \"e6c973ae-9b03-4511-abfc-360377684859\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.334091 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-inventory\") pod \"e6c973ae-9b03-4511-abfc-360377684859\" (UID: \"e6c973ae-9b03-4511-abfc-360377684859\") " Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.345454 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c973ae-9b03-4511-abfc-360377684859-kube-api-access-mw6tk" (OuterVolumeSpecName: "kube-api-access-mw6tk") pod "e6c973ae-9b03-4511-abfc-360377684859" (UID: "e6c973ae-9b03-4511-abfc-360377684859"). InnerVolumeSpecName "kube-api-access-mw6tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.362825 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-inventory" (OuterVolumeSpecName: "inventory") pod "e6c973ae-9b03-4511-abfc-360377684859" (UID: "e6c973ae-9b03-4511-abfc-360377684859"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.363275 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6c973ae-9b03-4511-abfc-360377684859" (UID: "e6c973ae-9b03-4511-abfc-360377684859"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.436315 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw6tk\" (UniqueName: \"kubernetes.io/projected/e6c973ae-9b03-4511-abfc-360377684859-kube-api-access-mw6tk\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.436345 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.436355 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c973ae-9b03-4511-abfc-360377684859-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.761566 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" event={"ID":"e6c973ae-9b03-4511-abfc-360377684859","Type":"ContainerDied","Data":"e78a9213054005d3e04ee815727758ec6c2c1d7940914433125d6a3a088a6a24"} Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.761605 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e78a9213054005d3e04ee815727758ec6c2c1d7940914433125d6a3a088a6a24" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.761625 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.844334 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll"] Feb 02 13:29:35 crc kubenswrapper[4955]: E0202 13:29:35.844724 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="extract-utilities" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.844739 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="extract-utilities" Feb 02 13:29:35 crc kubenswrapper[4955]: E0202 13:29:35.844770 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c973ae-9b03-4511-abfc-360377684859" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.844777 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c973ae-9b03-4511-abfc-360377684859" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 13:29:35 crc kubenswrapper[4955]: E0202 13:29:35.844789 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="extract-content" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.844795 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="extract-content" Feb 02 13:29:35 crc kubenswrapper[4955]: E0202 13:29:35.844807 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="registry-server" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.844813 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="registry-server" Feb 02 13:29:35 crc 
kubenswrapper[4955]: I0202 13:29:35.845086 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c89545b-f0fb-4e94-b3a6-5890631ead14" containerName="registry-server" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.845102 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c973ae-9b03-4511-abfc-360377684859" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.845736 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.851656 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.851816 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.851913 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.861287 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.862823 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll"] Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.946516 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.947007 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:35 crc kubenswrapper[4955]: I0202 13:29:35.947041 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6566f\" (UniqueName: \"kubernetes.io/projected/34b65653-8bae-4ebf-a7c8-1de410bed9ac-kube-api-access-6566f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.048455 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.048830 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6566f\" (UniqueName: \"kubernetes.io/projected/34b65653-8bae-4ebf-a7c8-1de410bed9ac-kube-api-access-6566f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.049046 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.054266 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.059955 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.077266 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6566f\" (UniqueName: \"kubernetes.io/projected/34b65653-8bae-4ebf-a7c8-1de410bed9ac-kube-api-access-6566f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nfqll\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.161285 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.655772 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll"] Feb 02 13:29:36 crc kubenswrapper[4955]: I0202 13:29:36.772622 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" event={"ID":"34b65653-8bae-4ebf-a7c8-1de410bed9ac","Type":"ContainerStarted","Data":"2b6de6e1fc54f56f73978ebbc4fea24ebd8b118cd2643883eb936cb14218014c"} Feb 02 13:29:37 crc kubenswrapper[4955]: I0202 13:29:37.782130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" event={"ID":"34b65653-8bae-4ebf-a7c8-1de410bed9ac","Type":"ContainerStarted","Data":"5d0c5097639247159b5c021dde19fd59f70f29edd58dd82d91184f0c069f0f90"} Feb 02 13:29:37 crc kubenswrapper[4955]: I0202 13:29:37.799676 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" podStartSLOduration=2.061439528 podStartE2EDuration="2.799661851s" podCreationTimestamp="2026-02-02 13:29:35 +0000 UTC" firstStartedPulling="2026-02-02 13:29:36.658219973 +0000 UTC m=+1627.570556423" lastFinishedPulling="2026-02-02 13:29:37.396442296 +0000 UTC m=+1628.308778746" observedRunningTime="2026-02-02 13:29:37.796272006 +0000 UTC m=+1628.708608476" watchObservedRunningTime="2026-02-02 13:29:37.799661851 +0000 UTC m=+1628.711998301" Feb 02 13:29:38 crc kubenswrapper[4955]: I0202 13:29:38.716499 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:29:38 crc kubenswrapper[4955]: E0202 13:29:38.716897 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:29:38 crc kubenswrapper[4955]: I0202 13:29:38.777741 4955 scope.go:117] "RemoveContainer" containerID="c2a6004a394285d858013a2b0d9bde8c131dc4196ac43ddfd67ad2080add130e" Feb 02 13:29:39 crc kubenswrapper[4955]: I0202 13:29:39.009139 4955 scope.go:117] "RemoveContainer" containerID="ace3d1ee29967a36470f8e3b9837d86b184440d57d8a834133cc9b1f0c2f717d" Feb 02 13:29:39 crc kubenswrapper[4955]: I0202 13:29:39.042890 4955 scope.go:117] "RemoveContainer" containerID="41c8a504526ef5a12d5e594f03cf85cbbe88b707858ccf962c333b46f02da204" Feb 02 13:29:39 crc kubenswrapper[4955]: I0202 13:29:39.100723 4955 scope.go:117] "RemoveContainer" containerID="2a7d51fff74b22d2db13c175da538155113d9a3c971ea37921b2ef2328ca62b4" Feb 02 13:29:39 crc kubenswrapper[4955]: I0202 13:29:39.151049 4955 scope.go:117] "RemoveContainer" containerID="0c399c966c77bb8d5912e2f2d8df59def2b63e84dea1b6395d56bf8303d46592" Feb 02 13:29:39 crc kubenswrapper[4955]: I0202 13:29:39.186220 4955 scope.go:117] "RemoveContainer" containerID="00debb421c851ec26b4de53a8c239f239fdcf9c8c21aa21c822475f375adefc7" Feb 02 13:29:42 crc kubenswrapper[4955]: I0202 13:29:42.822272 4955 generic.go:334] "Generic (PLEG): container finished" podID="34b65653-8bae-4ebf-a7c8-1de410bed9ac" 
containerID="5d0c5097639247159b5c021dde19fd59f70f29edd58dd82d91184f0c069f0f90" exitCode=0 Feb 02 13:29:42 crc kubenswrapper[4955]: I0202 13:29:42.822354 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" event={"ID":"34b65653-8bae-4ebf-a7c8-1de410bed9ac","Type":"ContainerDied","Data":"5d0c5097639247159b5c021dde19fd59f70f29edd58dd82d91184f0c069f0f90"} Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.247347 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.406303 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-inventory\") pod \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.407197 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6566f\" (UniqueName: \"kubernetes.io/projected/34b65653-8bae-4ebf-a7c8-1de410bed9ac-kube-api-access-6566f\") pod \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.407832 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-ssh-key-openstack-edpm-ipam\") pod \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\" (UID: \"34b65653-8bae-4ebf-a7c8-1de410bed9ac\") " Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.412483 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b65653-8bae-4ebf-a7c8-1de410bed9ac-kube-api-access-6566f" (OuterVolumeSpecName: "kube-api-access-6566f") pod "34b65653-8bae-4ebf-a7c8-1de410bed9ac" (UID: "34b65653-8bae-4ebf-a7c8-1de410bed9ac"). InnerVolumeSpecName "kube-api-access-6566f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.436545 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-inventory" (OuterVolumeSpecName: "inventory") pod "34b65653-8bae-4ebf-a7c8-1de410bed9ac" (UID: "34b65653-8bae-4ebf-a7c8-1de410bed9ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.437696 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34b65653-8bae-4ebf-a7c8-1de410bed9ac" (UID: "34b65653-8bae-4ebf-a7c8-1de410bed9ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.509264 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6566f\" (UniqueName: \"kubernetes.io/projected/34b65653-8bae-4ebf-a7c8-1de410bed9ac-kube-api-access-6566f\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.509298 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.509308 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b65653-8bae-4ebf-a7c8-1de410bed9ac-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.840654 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" event={"ID":"34b65653-8bae-4ebf-a7c8-1de410bed9ac","Type":"ContainerDied","Data":"2b6de6e1fc54f56f73978ebbc4fea24ebd8b118cd2643883eb936cb14218014c"} Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.840893 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b6de6e1fc54f56f73978ebbc4fea24ebd8b118cd2643883eb936cb14218014c" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.840724 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nfqll" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.910102 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz"] Feb 02 13:29:44 crc kubenswrapper[4955]: E0202 13:29:44.910481 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b65653-8bae-4ebf-a7c8-1de410bed9ac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.910506 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b65653-8bae-4ebf-a7c8-1de410bed9ac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.910718 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b65653-8bae-4ebf-a7c8-1de410bed9ac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.911294 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.913782 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.914108 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.914337 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.914425 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.916605 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94p4\" (UniqueName: \"kubernetes.io/projected/9601eed5-d546-4700-b8a6-99a577f72612-kube-api-access-c94p4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.916673 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.916718 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:44 crc kubenswrapper[4955]: I0202 13:29:44.924916 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz"] Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.018494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94p4\" (UniqueName: \"kubernetes.io/projected/9601eed5-d546-4700-b8a6-99a577f72612-kube-api-access-c94p4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.018587 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.018639 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.023107 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.024349 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.034863 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94p4\" (UniqueName: \"kubernetes.io/projected/9601eed5-d546-4700-b8a6-99a577f72612-kube-api-access-c94p4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-28wwz\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.227793 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:29:45 crc kubenswrapper[4955]: W0202 13:29:45.755887 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9601eed5_d546_4700_b8a6_99a577f72612.slice/crio-51f210e6f8ba04697d55725bda3cc4ebf664d97de1f2da02897773320334c62d WatchSource:0}: Error finding container 51f210e6f8ba04697d55725bda3cc4ebf664d97de1f2da02897773320334c62d: Status 404 returned error can't find the container with id 51f210e6f8ba04697d55725bda3cc4ebf664d97de1f2da02897773320334c62d Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.763613 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz"] Feb 02 13:29:45 crc kubenswrapper[4955]: I0202 13:29:45.850757 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" event={"ID":"9601eed5-d546-4700-b8a6-99a577f72612","Type":"ContainerStarted","Data":"51f210e6f8ba04697d55725bda3cc4ebf664d97de1f2da02897773320334c62d"} Feb 02 13:29:46 crc kubenswrapper[4955]: I0202 13:29:46.860788 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" event={"ID":"9601eed5-d546-4700-b8a6-99a577f72612","Type":"ContainerStarted","Data":"c933fdf0d5ee7004ef914a41141a4c1c1369953a6d33175c2fdecf2526e9b12b"} Feb 02 13:29:46 crc kubenswrapper[4955]: I0202 13:29:46.880975 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" podStartSLOduration=2.2847584 podStartE2EDuration="2.880958217s" podCreationTimestamp="2026-02-02 13:29:44 +0000 UTC" firstStartedPulling="2026-02-02 13:29:45.757646933 +0000 UTC m=+1636.669983383" lastFinishedPulling="2026-02-02 13:29:46.35384675 
+0000 UTC m=+1637.266183200" observedRunningTime="2026-02-02 13:29:46.876727121 +0000 UTC m=+1637.789063571" watchObservedRunningTime="2026-02-02 13:29:46.880958217 +0000 UTC m=+1637.793294667" Feb 02 13:29:51 crc kubenswrapper[4955]: I0202 13:29:51.716985 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:29:51 crc kubenswrapper[4955]: E0202 13:29:51.717465 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.147980 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr"] Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.150714 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.153098 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.153437 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.164321 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr"] Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.305752 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjjf\" (UniqueName: \"kubernetes.io/projected/ecb01763-3738-4b2b-aa08-100db54e1312-kube-api-access-vzjjf\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.305840 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb01763-3738-4b2b-aa08-100db54e1312-config-volume\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.306154 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb01763-3738-4b2b-aa08-100db54e1312-secret-volume\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.407820 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjjf\" (UniqueName: \"kubernetes.io/projected/ecb01763-3738-4b2b-aa08-100db54e1312-kube-api-access-vzjjf\") pod \"collect-profiles-29500650-pqhbr\" (UID: 
\"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.407895 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb01763-3738-4b2b-aa08-100db54e1312-config-volume\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.407971 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb01763-3738-4b2b-aa08-100db54e1312-secret-volume\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.409115 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb01763-3738-4b2b-aa08-100db54e1312-config-volume\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.420695 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb01763-3738-4b2b-aa08-100db54e1312-secret-volume\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.426449 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjjf\" (UniqueName: \"kubernetes.io/projected/ecb01763-3738-4b2b-aa08-100db54e1312-kube-api-access-vzjjf\") pod \"collect-profiles-29500650-pqhbr\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.483425 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.911163 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr"] Feb 02 13:30:00 crc kubenswrapper[4955]: I0202 13:30:00.970736 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" event={"ID":"ecb01763-3738-4b2b-aa08-100db54e1312","Type":"ContainerStarted","Data":"4e9e4ac1bb6ef0781824c84b2dfc9af5292c33f4eb8a744039d0980b4f6f5f17"} Feb 02 13:30:01 crc kubenswrapper[4955]: I0202 13:30:01.979873 4955 generic.go:334] "Generic (PLEG): container finished" podID="ecb01763-3738-4b2b-aa08-100db54e1312" containerID="53276c73660f5b0424a897edbe1ef3860a1bc430ed99648b95fb48a22fafb97e" exitCode=0 Feb 02 13:30:01 crc kubenswrapper[4955]: I0202 13:30:01.979970 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" event={"ID":"ecb01763-3738-4b2b-aa08-100db54e1312","Type":"ContainerDied","Data":"53276c73660f5b0424a897edbe1ef3860a1bc430ed99648b95fb48a22fafb97e"} Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.332888 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.469743 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb01763-3738-4b2b-aa08-100db54e1312-config-volume\") pod \"ecb01763-3738-4b2b-aa08-100db54e1312\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.469844 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjjf\" (UniqueName: \"kubernetes.io/projected/ecb01763-3738-4b2b-aa08-100db54e1312-kube-api-access-vzjjf\") pod \"ecb01763-3738-4b2b-aa08-100db54e1312\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.469933 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb01763-3738-4b2b-aa08-100db54e1312-secret-volume\") pod \"ecb01763-3738-4b2b-aa08-100db54e1312\" (UID: \"ecb01763-3738-4b2b-aa08-100db54e1312\") " Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.471501 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb01763-3738-4b2b-aa08-100db54e1312-config-volume" (OuterVolumeSpecName: "config-volume") pod "ecb01763-3738-4b2b-aa08-100db54e1312" (UID: "ecb01763-3738-4b2b-aa08-100db54e1312"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.477369 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb01763-3738-4b2b-aa08-100db54e1312-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecb01763-3738-4b2b-aa08-100db54e1312" (UID: "ecb01763-3738-4b2b-aa08-100db54e1312"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.479594 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb01763-3738-4b2b-aa08-100db54e1312-kube-api-access-vzjjf" (OuterVolumeSpecName: "kube-api-access-vzjjf") pod "ecb01763-3738-4b2b-aa08-100db54e1312" (UID: "ecb01763-3738-4b2b-aa08-100db54e1312"). InnerVolumeSpecName "kube-api-access-vzjjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.572103 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb01763-3738-4b2b-aa08-100db54e1312-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.572141 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb01763-3738-4b2b-aa08-100db54e1312-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.572155 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjjf\" (UniqueName: \"kubernetes.io/projected/ecb01763-3738-4b2b-aa08-100db54e1312-kube-api-access-vzjjf\") on node \"crc\" DevicePath \"\"" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.997930 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" event={"ID":"ecb01763-3738-4b2b-aa08-100db54e1312","Type":"ContainerDied","Data":"4e9e4ac1bb6ef0781824c84b2dfc9af5292c33f4eb8a744039d0980b4f6f5f17"} Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.997965 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9e4ac1bb6ef0781824c84b2dfc9af5292c33f4eb8a744039d0980b4f6f5f17" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:03.997976 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-pqhbr" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:05.049307 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mrzb9"] Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:05.059571 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-be3f-account-create-update-xvcsp"] Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:05.073426 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mrzb9"] Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:05.083437 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-be3f-account-create-update-xvcsp"] Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:05.728970 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8faf7286-4893-432d-bbe7-a431158357f9" path="/var/lib/kubelet/pods/8faf7286-4893-432d-bbe7-a431158357f9/volumes" Feb 02 13:30:05 crc kubenswrapper[4955]: I0202 13:30:05.729543 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5fb8423-7ae4-4515-8920-72d90de48d8e" path="/var/lib/kubelet/pods/b5fb8423-7ae4-4515-8920-72d90de48d8e/volumes" Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.037757 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-afac-account-create-update-kpmqz"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.045551 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q76t2"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.053051 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c3ee-account-create-update-7nl28"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.063112 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-75hh6"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.071081 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q76t2"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.078401 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-75hh6"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.086229 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-afac-account-create-update-kpmqz"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.093911 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c3ee-account-create-update-7nl28"] Feb 02 13:30:06 crc kubenswrapper[4955]: I0202 13:30:06.716174 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:30:06 crc kubenswrapper[4955]: E0202 13:30:06.716415 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:30:07 crc kubenswrapper[4955]: I0202 13:30:07.727127 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a17c62e-18bb-4a12-9865-8d38c0b7102f" 
path="/var/lib/kubelet/pods/2a17c62e-18bb-4a12-9865-8d38c0b7102f/volumes" Feb 02 13:30:07 crc kubenswrapper[4955]: I0202 13:30:07.728280 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67459792-2667-4acf-9ce1-6b715ce15a98" path="/var/lib/kubelet/pods/67459792-2667-4acf-9ce1-6b715ce15a98/volumes" Feb 02 13:30:07 crc kubenswrapper[4955]: I0202 13:30:07.728910 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ea679e-a44a-4bd0-867e-044542b96bbb" path="/var/lib/kubelet/pods/98ea679e-a44a-4bd0-867e-044542b96bbb/volumes" Feb 02 13:30:07 crc kubenswrapper[4955]: I0202 13:30:07.729507 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6eff7b8-c700-48bd-b71b-0343fca61cc4" path="/var/lib/kubelet/pods/c6eff7b8-c700-48bd-b71b-0343fca61cc4/volumes" Feb 02 13:30:21 crc kubenswrapper[4955]: I0202 13:30:21.136341 4955 generic.go:334] "Generic (PLEG): container finished" podID="9601eed5-d546-4700-b8a6-99a577f72612" containerID="c933fdf0d5ee7004ef914a41141a4c1c1369953a6d33175c2fdecf2526e9b12b" exitCode=0 Feb 02 13:30:21 crc kubenswrapper[4955]: I0202 13:30:21.136443 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" event={"ID":"9601eed5-d546-4700-b8a6-99a577f72612","Type":"ContainerDied","Data":"c933fdf0d5ee7004ef914a41141a4c1c1369953a6d33175c2fdecf2526e9b12b"} Feb 02 13:30:21 crc kubenswrapper[4955]: I0202 13:30:21.717033 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:30:21 crc kubenswrapper[4955]: E0202 13:30:21.717622 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.610015 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.749074 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94p4\" (UniqueName: \"kubernetes.io/projected/9601eed5-d546-4700-b8a6-99a577f72612-kube-api-access-c94p4\") pod \"9601eed5-d546-4700-b8a6-99a577f72612\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.750220 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-ssh-key-openstack-edpm-ipam\") pod \"9601eed5-d546-4700-b8a6-99a577f72612\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.750349 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-inventory\") pod \"9601eed5-d546-4700-b8a6-99a577f72612\" (UID: \"9601eed5-d546-4700-b8a6-99a577f72612\") " Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.755588 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9601eed5-d546-4700-b8a6-99a577f72612-kube-api-access-c94p4" (OuterVolumeSpecName: "kube-api-access-c94p4") pod "9601eed5-d546-4700-b8a6-99a577f72612" (UID: "9601eed5-d546-4700-b8a6-99a577f72612"). InnerVolumeSpecName "kube-api-access-c94p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.779691 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-inventory" (OuterVolumeSpecName: "inventory") pod "9601eed5-d546-4700-b8a6-99a577f72612" (UID: "9601eed5-d546-4700-b8a6-99a577f72612"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.794674 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9601eed5-d546-4700-b8a6-99a577f72612" (UID: "9601eed5-d546-4700-b8a6-99a577f72612"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.852854 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.852893 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9601eed5-d546-4700-b8a6-99a577f72612-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:30:22 crc kubenswrapper[4955]: I0202 13:30:22.852906 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94p4\" (UniqueName: \"kubernetes.io/projected/9601eed5-d546-4700-b8a6-99a577f72612-kube-api-access-c94p4\") on node \"crc\" DevicePath \"\"" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.158717 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" event={"ID":"9601eed5-d546-4700-b8a6-99a577f72612","Type":"ContainerDied","Data":"51f210e6f8ba04697d55725bda3cc4ebf664d97de1f2da02897773320334c62d"} Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.158753 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f210e6f8ba04697d55725bda3cc4ebf664d97de1f2da02897773320334c62d" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.158765 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-28wwz" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.253937 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j"] Feb 02 13:30:23 crc kubenswrapper[4955]: E0202 13:30:23.254272 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb01763-3738-4b2b-aa08-100db54e1312" containerName="collect-profiles" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.254287 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb01763-3738-4b2b-aa08-100db54e1312" containerName="collect-profiles" Feb 02 13:30:23 crc kubenswrapper[4955]: E0202 13:30:23.254302 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9601eed5-d546-4700-b8a6-99a577f72612" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.254311 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9601eed5-d546-4700-b8a6-99a577f72612" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.254504 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9601eed5-d546-4700-b8a6-99a577f72612" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.254527 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb01763-3738-4b2b-aa08-100db54e1312" containerName="collect-profiles" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.255096 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.259732 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.260127 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.260376 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.260380 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.261310 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.261366 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.261441 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87dv\" (UniqueName: \"kubernetes.io/projected/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-kube-api-access-t87dv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.270177 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j"] Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.361933 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t87dv\" (UniqueName: \"kubernetes.io/projected/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-kube-api-access-t87dv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.362055 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.362110 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.366130 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.375257 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.378590 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t87dv\" (UniqueName: \"kubernetes.io/projected/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-kube-api-access-t87dv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9288j\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:23 crc kubenswrapper[4955]: I0202 13:30:23.571573 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:30:24 crc kubenswrapper[4955]: I0202 13:30:24.113313 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j"] Feb 02 13:30:24 crc kubenswrapper[4955]: I0202 13:30:24.166926 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" event={"ID":"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64","Type":"ContainerStarted","Data":"7f0ce7beb652e6e22a193619f409475383d255eac067a33becd31c8274f30512"} Feb 02 13:30:25 crc kubenswrapper[4955]: I0202 13:30:25.176729 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" event={"ID":"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64","Type":"ContainerStarted","Data":"7bb183ea774d35164fd7cde090b64ab75eebbc09545305db0c14c7c01084e92c"} Feb 02 13:30:25 crc kubenswrapper[4955]: I0202 13:30:25.199356 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" podStartSLOduration=1.509173175 podStartE2EDuration="2.199339434s" podCreationTimestamp="2026-02-02 13:30:23 +0000 UTC" firstStartedPulling="2026-02-02 13:30:24.132978896 +0000 UTC m=+1675.045315346" lastFinishedPulling="2026-02-02 13:30:24.823145155 +0000 UTC m=+1675.735481605" observedRunningTime="2026-02-02 13:30:25.195946279 +0000 UTC m=+1676.108282729" watchObservedRunningTime="2026-02-02 13:30:25.199339434 +0000 UTC m=+1676.111675884" Feb 02 13:30:36 crc kubenswrapper[4955]: I0202 13:30:36.716878 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:30:36 crc kubenswrapper[4955]: E0202 
13:30:36.717610 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:30:39 crc kubenswrapper[4955]: I0202 13:30:39.386749 4955 scope.go:117] "RemoveContainer" containerID="8eefa9e33a962720d297e3051d63198af9ed29a47c4b1160460e30af54999844" Feb 02 13:30:39 crc kubenswrapper[4955]: I0202 13:30:39.414731 4955 scope.go:117] "RemoveContainer" containerID="2b2ed410488de04f9007269d824389f9cda8924c0c10470cdadda4b729e4f376" Feb 02 13:30:39 crc kubenswrapper[4955]: I0202 13:30:39.476434 4955 scope.go:117] "RemoveContainer" containerID="61f5984eed4d359d9f82a92f3bac2910a804f3e8a970b0840e4a9dddd450a4d9" Feb 02 13:30:39 crc kubenswrapper[4955]: I0202 13:30:39.526407 4955 scope.go:117] "RemoveContainer" containerID="799246176d535c7b83e407a410833aabb1c68f0571ed47c263bea4e75b9fffa2" Feb 02 13:30:39 crc kubenswrapper[4955]: I0202 13:30:39.574616 4955 scope.go:117] "RemoveContainer" containerID="54ae6550e82a61884ff85589327d624190bc43eca56aeaa92e6ab41ee52879c3" Feb 02 13:30:39 crc kubenswrapper[4955]: I0202 13:30:39.626326 4955 scope.go:117] "RemoveContainer" containerID="0cac018f6ddd7d5d41b52cb36a7aa0e9c4156c9f80cf94646cd6290efd35ed0e" Feb 02 13:30:51 crc kubenswrapper[4955]: I0202 13:30:51.716636 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:30:51 crc kubenswrapper[4955]: E0202 13:30:51.717305 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:30:58 crc kubenswrapper[4955]: I0202 13:30:58.045123 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8vmrp"] Feb 02 13:30:58 crc kubenswrapper[4955]: I0202 13:30:58.052613 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8vmrp"] Feb 02 13:30:59 crc kubenswrapper[4955]: I0202 13:30:59.728517 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72065269-3b09-46d2-a98d-00f4f38d40a1" path="/var/lib/kubelet/pods/72065269-3b09-46d2-a98d-00f4f38d40a1/volumes" Feb 02 13:31:03 crc kubenswrapper[4955]: I0202 13:31:03.717263 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:31:03 crc kubenswrapper[4955]: E0202 13:31:03.718033 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:31:09 crc kubenswrapper[4955]: I0202 13:31:09.554022 4955 generic.go:334] "Generic (PLEG): 
container finished" podID="4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" containerID="7bb183ea774d35164fd7cde090b64ab75eebbc09545305db0c14c7c01084e92c" exitCode=0 Feb 02 13:31:09 crc kubenswrapper[4955]: I0202 13:31:09.554102 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" event={"ID":"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64","Type":"ContainerDied","Data":"7bb183ea774d35164fd7cde090b64ab75eebbc09545305db0c14c7c01084e92c"} Feb 02 13:31:10 crc kubenswrapper[4955]: I0202 13:31:10.984375 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.075544 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t87dv\" (UniqueName: \"kubernetes.io/projected/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-kube-api-access-t87dv\") pod \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.075708 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-inventory\") pod \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.075918 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-ssh-key-openstack-edpm-ipam\") pod \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\" (UID: \"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64\") " Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.081253 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-kube-api-access-t87dv" (OuterVolumeSpecName: "kube-api-access-t87dv") pod "4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" (UID: "4d22bd39-e9c6-456d-98ea-5adcc4e3aa64"). InnerVolumeSpecName "kube-api-access-t87dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.102345 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" (UID: "4d22bd39-e9c6-456d-98ea-5adcc4e3aa64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.103347 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-inventory" (OuterVolumeSpecName: "inventory") pod "4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" (UID: "4d22bd39-e9c6-456d-98ea-5adcc4e3aa64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.177549 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.177593 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t87dv\" (UniqueName: \"kubernetes.io/projected/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-kube-api-access-t87dv\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.177603 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d22bd39-e9c6-456d-98ea-5adcc4e3aa64-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.571573 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" event={"ID":"4d22bd39-e9c6-456d-98ea-5adcc4e3aa64","Type":"ContainerDied","Data":"7f0ce7beb652e6e22a193619f409475383d255eac067a33becd31c8274f30512"} Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.571879 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0ce7beb652e6e22a193619f409475383d255eac067a33becd31c8274f30512" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.571691 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9288j" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.740967 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-657gr"] Feb 02 13:31:11 crc kubenswrapper[4955]: E0202 13:31:11.741388 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.741418 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.741637 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d22bd39-e9c6-456d-98ea-5adcc4e3aa64" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.742322 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.744390 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.744723 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.745047 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.745055 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.760806 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-657gr"] Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.791827 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.792068 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.792165 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x9wz\" (UniqueName: \"kubernetes.io/projected/5e4929a1-70d3-4e76-a561-c35bfc07b562-kube-api-access-9x9wz\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.894449 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.894531 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.894622 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x9wz\" (UniqueName: \"kubernetes.io/projected/5e4929a1-70d3-4e76-a561-c35bfc07b562-kube-api-access-9x9wz\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc 
kubenswrapper[4955]: I0202 13:31:11.901400 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.906405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:11 crc kubenswrapper[4955]: I0202 13:31:11.910918 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x9wz\" (UniqueName: \"kubernetes.io/projected/5e4929a1-70d3-4e76-a561-c35bfc07b562-kube-api-access-9x9wz\") pod \"ssh-known-hosts-edpm-deployment-657gr\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:12 crc kubenswrapper[4955]: I0202 13:31:12.061063 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:12 crc kubenswrapper[4955]: I0202 13:31:12.553194 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-657gr"] Feb 02 13:31:12 crc kubenswrapper[4955]: I0202 13:31:12.589998 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" event={"ID":"5e4929a1-70d3-4e76-a561-c35bfc07b562","Type":"ContainerStarted","Data":"eecda434d8abce33d8ef544f4d1c7659a9c5e5b06a357b3b5a377cebced83a7b"} Feb 02 13:31:13 crc kubenswrapper[4955]: I0202 13:31:13.597969 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" event={"ID":"5e4929a1-70d3-4e76-a561-c35bfc07b562","Type":"ContainerStarted","Data":"2b382234ace3f529ac5c04dd238ee04c7da879fa71864fb2763a297ce17cd0e4"} Feb 02 13:31:13 crc kubenswrapper[4955]: I0202 13:31:13.623119 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" podStartSLOduration=2.097785865 podStartE2EDuration="2.623092817s" podCreationTimestamp="2026-02-02 13:31:11 +0000 UTC" firstStartedPulling="2026-02-02 13:31:12.557758355 +0000 UTC m=+1723.470094805" lastFinishedPulling="2026-02-02 13:31:13.083065307 +0000 UTC m=+1723.995401757" observedRunningTime="2026-02-02 13:31:13.615397874 +0000 UTC m=+1724.527734324" watchObservedRunningTime="2026-02-02 13:31:13.623092817 +0000 UTC m=+1724.535429267" Feb 02 13:31:16 crc kubenswrapper[4955]: I0202 13:31:16.717492 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:31:16 crc kubenswrapper[4955]: E0202 13:31:16.718204 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:31:19 crc 
kubenswrapper[4955]: I0202 13:31:19.665951 4955 generic.go:334] "Generic (PLEG): container finished" podID="5e4929a1-70d3-4e76-a561-c35bfc07b562" containerID="2b382234ace3f529ac5c04dd238ee04c7da879fa71864fb2763a297ce17cd0e4" exitCode=0 Feb 02 13:31:19 crc kubenswrapper[4955]: I0202 13:31:19.666275 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" event={"ID":"5e4929a1-70d3-4e76-a561-c35bfc07b562","Type":"ContainerDied","Data":"2b382234ace3f529ac5c04dd238ee04c7da879fa71864fb2763a297ce17cd0e4"} Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.036671 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dxm48"] Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.047838 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dxm48"] Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.142474 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.277590 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x9wz\" (UniqueName: \"kubernetes.io/projected/5e4929a1-70d3-4e76-a561-c35bfc07b562-kube-api-access-9x9wz\") pod \"5e4929a1-70d3-4e76-a561-c35bfc07b562\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.277836 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-inventory-0\") pod \"5e4929a1-70d3-4e76-a561-c35bfc07b562\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.277867 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-ssh-key-openstack-edpm-ipam\") pod \"5e4929a1-70d3-4e76-a561-c35bfc07b562\" (UID: \"5e4929a1-70d3-4e76-a561-c35bfc07b562\") " Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.283726 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4929a1-70d3-4e76-a561-c35bfc07b562-kube-api-access-9x9wz" (OuterVolumeSpecName: "kube-api-access-9x9wz") pod "5e4929a1-70d3-4e76-a561-c35bfc07b562" (UID: "5e4929a1-70d3-4e76-a561-c35bfc07b562"). InnerVolumeSpecName "kube-api-access-9x9wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.310702 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "5e4929a1-70d3-4e76-a561-c35bfc07b562" (UID: "5e4929a1-70d3-4e76-a561-c35bfc07b562"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.314511 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e4929a1-70d3-4e76-a561-c35bfc07b562" (UID: "5e4929a1-70d3-4e76-a561-c35bfc07b562"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.381058 4955 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.381105 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e4929a1-70d3-4e76-a561-c35bfc07b562-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.381119 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x9wz\" (UniqueName: \"kubernetes.io/projected/5e4929a1-70d3-4e76-a561-c35bfc07b562-kube-api-access-9x9wz\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.688276 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" event={"ID":"5e4929a1-70d3-4e76-a561-c35bfc07b562","Type":"ContainerDied","Data":"eecda434d8abce33d8ef544f4d1c7659a9c5e5b06a357b3b5a377cebced83a7b"} Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.688316 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eecda434d8abce33d8ef544f4d1c7659a9c5e5b06a357b3b5a377cebced83a7b" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.688356 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-657gr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.728299 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35bfa86c-42fd-4c56-8b93-599e84fb52df" path="/var/lib/kubelet/pods/35bfa86c-42fd-4c56-8b93-599e84fb52df/volumes" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.760913 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr"] Feb 02 13:31:21 crc kubenswrapper[4955]: E0202 13:31:21.761424 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4929a1-70d3-4e76-a561-c35bfc07b562" containerName="ssh-known-hosts-edpm-deployment" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.761487 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4929a1-70d3-4e76-a561-c35bfc07b562" containerName="ssh-known-hosts-edpm-deployment" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.761779 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4929a1-70d3-4e76-a561-c35bfc07b562" containerName="ssh-known-hosts-edpm-deployment" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.762441 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.764982 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.765152 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.765463 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.766106 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.770676 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr"] Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.891497 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.891572 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fdm\" (UniqueName: \"kubernetes.io/projected/4babc7e9-4c02-4643-a14f-719526d95e55-kube-api-access-k6fdm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.891602 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.994551 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.994996 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fdm\" (UniqueName: \"kubernetes.io/projected/4babc7e9-4c02-4643-a14f-719526d95e55-kube-api-access-k6fdm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:21 crc kubenswrapper[4955]: I0202 13:31:21.995188 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:22 crc kubenswrapper[4955]: I0202 13:31:22.000813 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:22 crc kubenswrapper[4955]: I0202 13:31:22.000831 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:22 crc kubenswrapper[4955]: I0202 13:31:22.019974 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fdm\" (UniqueName: \"kubernetes.io/projected/4babc7e9-4c02-4643-a14f-719526d95e55-kube-api-access-k6fdm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qcxnr\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:22 crc kubenswrapper[4955]: I0202 13:31:22.080461 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:22 crc kubenswrapper[4955]: I0202 13:31:22.627833 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr"] Feb 02 13:31:22 crc kubenswrapper[4955]: I0202 13:31:22.697852 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" event={"ID":"4babc7e9-4c02-4643-a14f-719526d95e55","Type":"ContainerStarted","Data":"cf0ff3880494a35a1b5d2735908f6393d3699d0c901f4c4daf029e09a985d55a"} Feb 02 13:31:23 crc kubenswrapper[4955]: I0202 13:31:23.709249 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" event={"ID":"4babc7e9-4c02-4643-a14f-719526d95e55","Type":"ContainerStarted","Data":"797a8a8175b49ac7549d6aba17d6e9238f8e1058314d28f937970c6fe64d414c"} Feb 02 13:31:23 crc kubenswrapper[4955]: I0202 13:31:23.731762 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" podStartSLOduration=2.336357606 podStartE2EDuration="2.731737104s" podCreationTimestamp="2026-02-02 13:31:21 +0000 UTC" firstStartedPulling="2026-02-02 13:31:22.62675956 +0000 UTC m=+1733.539096010" lastFinishedPulling="2026-02-02 13:31:23.022139058 +0000 UTC m=+1733.934475508" observedRunningTime="2026-02-02 13:31:23.72274909 +0000 UTC m=+1734.635085610" watchObservedRunningTime="2026-02-02 13:31:23.731737104 +0000 UTC m=+1734.644073604" Feb 02 13:31:30 crc kubenswrapper[4955]: I0202 13:31:30.716088 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:31:30 crc kubenswrapper[4955]: E0202 13:31:30.716857 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:31:30 crc kubenswrapper[4955]: I0202 13:31:30.774425 4955 generic.go:334] "Generic (PLEG): container finished" podID="4babc7e9-4c02-4643-a14f-719526d95e55" containerID="797a8a8175b49ac7549d6aba17d6e9238f8e1058314d28f937970c6fe64d414c" exitCode=0 Feb 02 13:31:30 crc kubenswrapper[4955]: I0202 13:31:30.774474 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" event={"ID":"4babc7e9-4c02-4643-a14f-719526d95e55","Type":"ContainerDied","Data":"797a8a8175b49ac7549d6aba17d6e9238f8e1058314d28f937970c6fe64d414c"} Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.160792 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.197887 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fdm\" (UniqueName: \"kubernetes.io/projected/4babc7e9-4c02-4643-a14f-719526d95e55-kube-api-access-k6fdm\") pod \"4babc7e9-4c02-4643-a14f-719526d95e55\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.199496 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-ssh-key-openstack-edpm-ipam\") pod \"4babc7e9-4c02-4643-a14f-719526d95e55\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.199551 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-inventory\") pod \"4babc7e9-4c02-4643-a14f-719526d95e55\" (UID: \"4babc7e9-4c02-4643-a14f-719526d95e55\") " Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.206809 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4babc7e9-4c02-4643-a14f-719526d95e55-kube-api-access-k6fdm" (OuterVolumeSpecName: "kube-api-access-k6fdm") pod "4babc7e9-4c02-4643-a14f-719526d95e55" (UID: "4babc7e9-4c02-4643-a14f-719526d95e55"). InnerVolumeSpecName "kube-api-access-k6fdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.226938 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4babc7e9-4c02-4643-a14f-719526d95e55" (UID: "4babc7e9-4c02-4643-a14f-719526d95e55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.228071 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-inventory" (OuterVolumeSpecName: "inventory") pod "4babc7e9-4c02-4643-a14f-719526d95e55" (UID: "4babc7e9-4c02-4643-a14f-719526d95e55"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.301072 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.301272 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4babc7e9-4c02-4643-a14f-719526d95e55-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.301327 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fdm\" (UniqueName: \"kubernetes.io/projected/4babc7e9-4c02-4643-a14f-719526d95e55-kube-api-access-k6fdm\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.797523 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" event={"ID":"4babc7e9-4c02-4643-a14f-719526d95e55","Type":"ContainerDied","Data":"cf0ff3880494a35a1b5d2735908f6393d3699d0c901f4c4daf029e09a985d55a"} Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.797848 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf0ff3880494a35a1b5d2735908f6393d3699d0c901f4c4daf029e09a985d55a" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.797667 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qcxnr" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.876505 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s"] Feb 02 13:31:32 crc kubenswrapper[4955]: E0202 13:31:32.877075 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4babc7e9-4c02-4643-a14f-719526d95e55" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.877099 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4babc7e9-4c02-4643-a14f-719526d95e55" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.877380 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4babc7e9-4c02-4643-a14f-719526d95e55" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.878183 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.880113 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.880656 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.880885 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.880935 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.885527 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s"] Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.915611 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.915813 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:32 crc kubenswrapper[4955]: I0202 13:31:32.916028 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f48k\" (UniqueName: \"kubernetes.io/projected/660c8287-e36e-4216-8834-45913aa22480-kube-api-access-4f48k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.017614 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f48k\" (UniqueName: \"kubernetes.io/projected/660c8287-e36e-4216-8834-45913aa22480-kube-api-access-4f48k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.017689 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.017779 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.022276 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.022303 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.038158 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f48k\" (UniqueName: \"kubernetes.io/projected/660c8287-e36e-4216-8834-45913aa22480-kube-api-access-4f48k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.197792 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.743104 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s"] Feb 02 13:31:33 crc kubenswrapper[4955]: I0202 13:31:33.808702 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" event={"ID":"660c8287-e36e-4216-8834-45913aa22480","Type":"ContainerStarted","Data":"328e88dff731a088ebe25c131790c2bc5178fb8cb493c38c3b29fef3c6cc6938"} Feb 02 13:31:34 crc kubenswrapper[4955]: I0202 13:31:34.819258 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" event={"ID":"660c8287-e36e-4216-8834-45913aa22480","Type":"ContainerStarted","Data":"2183ef5267015d4ee27f3ac2947073cdb2c498dd41681149df6bcd66e01c0e47"} Feb 02 13:31:34 crc kubenswrapper[4955]: I0202 13:31:34.841739 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" podStartSLOduration=2.409437871 podStartE2EDuration="2.841717464s" podCreationTimestamp="2026-02-02 13:31:32 +0000 UTC" firstStartedPulling="2026-02-02 13:31:33.748201856 +0000 UTC m=+1744.660538316" lastFinishedPulling="2026-02-02 13:31:34.180481459 +0000 UTC m=+1745.092817909" observedRunningTime="2026-02-02 13:31:34.835008986 +0000 UTC m=+1745.747345446" watchObservedRunningTime="2026-02-02 13:31:34.841717464 +0000 UTC m=+1745.754053914" Feb 02 13:31:39 crc kubenswrapper[4955]: I0202 13:31:39.777429 4955 scope.go:117] "RemoveContainer" containerID="f251c6b4795788977dd41e0277ee1fce2fd8e91dfa29760dadb6bebd0367034e" Feb 02 13:31:39 crc kubenswrapper[4955]: I0202 13:31:39.825278 4955 scope.go:117] "RemoveContainer" 
containerID="0b18ccb97e9e4af5a4d9edc1c57ce01982e1654f12d19b171ec8871f60ae6bf9" Feb 02 13:31:42 crc kubenswrapper[4955]: I0202 13:31:42.046787 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q5zzn"] Feb 02 13:31:42 crc kubenswrapper[4955]: I0202 13:31:42.055511 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q5zzn"] Feb 02 13:31:42 crc kubenswrapper[4955]: I0202 13:31:42.717585 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:31:42 crc kubenswrapper[4955]: E0202 13:31:42.718201 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:31:43 crc kubenswrapper[4955]: I0202 13:31:43.727209 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b673660-45c3-419e-af6f-66cb08d272e0" path="/var/lib/kubelet/pods/4b673660-45c3-419e-af6f-66cb08d272e0/volumes" Feb 02 13:31:43 crc kubenswrapper[4955]: I0202 13:31:43.908172 4955 generic.go:334] "Generic (PLEG): container finished" podID="660c8287-e36e-4216-8834-45913aa22480" containerID="2183ef5267015d4ee27f3ac2947073cdb2c498dd41681149df6bcd66e01c0e47" exitCode=0 Feb 02 13:31:43 crc kubenswrapper[4955]: I0202 13:31:43.908247 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" event={"ID":"660c8287-e36e-4216-8834-45913aa22480","Type":"ContainerDied","Data":"2183ef5267015d4ee27f3ac2947073cdb2c498dd41681149df6bcd66e01c0e47"} Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.376874 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.467329 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f48k\" (UniqueName: \"kubernetes.io/projected/660c8287-e36e-4216-8834-45913aa22480-kube-api-access-4f48k\") pod \"660c8287-e36e-4216-8834-45913aa22480\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.467416 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-ssh-key-openstack-edpm-ipam\") pod \"660c8287-e36e-4216-8834-45913aa22480\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.467545 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-inventory\") pod \"660c8287-e36e-4216-8834-45913aa22480\" (UID: \"660c8287-e36e-4216-8834-45913aa22480\") " Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.472857 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660c8287-e36e-4216-8834-45913aa22480-kube-api-access-4f48k" (OuterVolumeSpecName: "kube-api-access-4f48k") pod "660c8287-e36e-4216-8834-45913aa22480" (UID: "660c8287-e36e-4216-8834-45913aa22480"). InnerVolumeSpecName "kube-api-access-4f48k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.495893 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "660c8287-e36e-4216-8834-45913aa22480" (UID: "660c8287-e36e-4216-8834-45913aa22480"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.498107 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-inventory" (OuterVolumeSpecName: "inventory") pod "660c8287-e36e-4216-8834-45913aa22480" (UID: "660c8287-e36e-4216-8834-45913aa22480"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.569294 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.569332 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/660c8287-e36e-4216-8834-45913aa22480-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.569342 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f48k\" (UniqueName: \"kubernetes.io/projected/660c8287-e36e-4216-8834-45913aa22480-kube-api-access-4f48k\") on node \"crc\" DevicePath \"\"" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.925218 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" event={"ID":"660c8287-e36e-4216-8834-45913aa22480","Type":"ContainerDied","Data":"328e88dff731a088ebe25c131790c2bc5178fb8cb493c38c3b29fef3c6cc6938"} Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.925262 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="328e88dff731a088ebe25c131790c2bc5178fb8cb493c38c3b29fef3c6cc6938" Feb 02 13:31:45 crc kubenswrapper[4955]: I0202 13:31:45.925318 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s" Feb 02 13:31:45 crc kubenswrapper[4955]: E0202 13:31:45.926309 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod660c8287_e36e_4216_8834_45913aa22480.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod660c8287_e36e_4216_8834_45913aa22480.slice/crio-328e88dff731a088ebe25c131790c2bc5178fb8cb493c38c3b29fef3c6cc6938\": RecentStats: unable to find data in memory cache]" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.000941 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt"] Feb 02 13:31:46 crc kubenswrapper[4955]: E0202 13:31:46.001369 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660c8287-e36e-4216-8834-45913aa22480" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.001386 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="660c8287-e36e-4216-8834-45913aa22480" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.001591 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="660c8287-e36e-4216-8834-45913aa22480" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.002423 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.005634 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.005655 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.005775 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.005994 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.006074 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.006305 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.006518 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.009144 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.017983 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt"] Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.079356 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.079658 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.079778 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.079879 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6pw\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-kube-api-access-pn6pw\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.079962 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080040 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080138 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080227 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080406 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080494 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: 
\"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080660 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080728 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.080887 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.183743 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6pw\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-kube-api-access-pn6pw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.183871 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.183910 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.183966 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184021 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184048 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184076 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184112 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184142 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184171 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184202 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184258 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184283 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.184307 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.190762 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.191055 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.190849 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.190910 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.191333 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 
13:31:46.191443 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.193810 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.193948 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.193956 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.195962 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.197163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.198086 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.208582 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6pw\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-kube-api-access-pn6pw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: 
\"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.216340 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.321487 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.875260 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt"] Feb 02 13:31:46 crc kubenswrapper[4955]: W0202 13:31:46.877586 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91b67e94_3e12_478a_8691_2768084b6229.slice/crio-d7a98db7332578b65efa4ebd87a23fa64f0a771eb942a026980badc4346b97f9 WatchSource:0}: Error finding container d7a98db7332578b65efa4ebd87a23fa64f0a771eb942a026980badc4346b97f9: Status 404 returned error can't find the container with id d7a98db7332578b65efa4ebd87a23fa64f0a771eb942a026980badc4346b97f9 Feb 02 13:31:46 crc kubenswrapper[4955]: I0202 13:31:46.933850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" event={"ID":"91b67e94-3e12-478a-8691-2768084b6229","Type":"ContainerStarted","Data":"d7a98db7332578b65efa4ebd87a23fa64f0a771eb942a026980badc4346b97f9"} Feb 02 13:31:47 crc kubenswrapper[4955]: I0202 13:31:47.944048 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" event={"ID":"91b67e94-3e12-478a-8691-2768084b6229","Type":"ContainerStarted","Data":"191bfbe0834d02070368d2af607c0637af39a7cea6594c72ef75be3754e6848d"} Feb 02 13:31:55 crc kubenswrapper[4955]: I0202 13:31:55.717137 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:31:55 crc kubenswrapper[4955]: E0202 13:31:55.717933 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:32:06 crc kubenswrapper[4955]: I0202 13:32:06.039197 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" podStartSLOduration=20.438970115 podStartE2EDuration="21.039169292s" podCreationTimestamp="2026-02-02 13:31:45 +0000 UTC" firstStartedPulling="2026-02-02 13:31:46.8802915 +0000 UTC m=+1757.792627950" lastFinishedPulling="2026-02-02 13:31:47.480490677 +0000 UTC m=+1758.392827127" observedRunningTime="2026-02-02 13:31:47.97231696 +0000 UTC m=+1758.884653410" watchObservedRunningTime="2026-02-02 13:32:06.039169292 +0000 UTC m=+1776.951505752" Feb 02 13:32:06 crc 
Feb 02 13:32:06 crc kubenswrapper[4955]: I0202 13:32:06.043904 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5k49j"]
Feb 02 13:32:06 crc kubenswrapper[4955]: I0202 13:32:06.055883 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5k49j"]
Feb 02 13:32:07 crc kubenswrapper[4955]: I0202 13:32:07.726193 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6fd940-a5a3-444e-8401-51a972389d19" path="/var/lib/kubelet/pods/2d6fd940-a5a3-444e-8401-51a972389d19/volumes"
Feb 02 13:32:08 crc kubenswrapper[4955]: I0202 13:32:08.716141 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775"
Feb 02 13:32:08 crc kubenswrapper[4955]: E0202 13:32:08.716519 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:32:21 crc kubenswrapper[4955]: I0202 13:32:21.229522 4955 generic.go:334] "Generic (PLEG): container finished" podID="91b67e94-3e12-478a-8691-2768084b6229" containerID="191bfbe0834d02070368d2af607c0637af39a7cea6594c72ef75be3754e6848d" exitCode=0
Feb 02 13:32:21 crc kubenswrapper[4955]: I0202 13:32:21.229671 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" event={"ID":"91b67e94-3e12-478a-8691-2768084b6229","Type":"ContainerDied","Data":"191bfbe0834d02070368d2af607c0637af39a7cea6594c72ef75be3754e6848d"}
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.642467 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt"
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.704576 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-ovn-default-certs-0\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.704620 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-bootstrap-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.704651 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-repo-setup-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.704679 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-neutron-metadata-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.704722 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn6pw\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-kube-api-access-pn6pw\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.704789 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ovn-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.705667 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.705773 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.705994 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-inventory\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.706310 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-libvirt-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.706369 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ssh-key-openstack-edpm-ipam\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.706499 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-nova-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.706595 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-telemetry-combined-ca-bundle\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.706656 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"91b67e94-3e12-478a-8691-2768084b6229\" (UID: \"91b67e94-3e12-478a-8691-2768084b6229\") "
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.710466 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.711620 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.711846 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-kube-api-access-pn6pw" (OuterVolumeSpecName: "kube-api-access-pn6pw") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "kube-api-access-pn6pw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.714938 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.714989 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.715049 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.715075 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.715636 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.715766 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.715940 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.716574 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.716738 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.739132 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-inventory" (OuterVolumeSpecName: "inventory") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.740009 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91b67e94-3e12-478a-8691-2768084b6229" (UID: "91b67e94-3e12-478a-8691-2768084b6229"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809297 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809483 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809541 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809642 4955 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809704 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809778 4955 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809835 4955 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809888 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809939 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.809995 4955 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.810047 4955 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.810099 4955 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.810149 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn6pw\" (UniqueName: \"kubernetes.io/projected/91b67e94-3e12-478a-8691-2768084b6229-kube-api-access-pn6pw\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:22 crc kubenswrapper[4955]: I0202 13:32:22.810312 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b67e94-3e12-478a-8691-2768084b6229-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.248714 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt" event={"ID":"91b67e94-3e12-478a-8691-2768084b6229","Type":"ContainerDied","Data":"d7a98db7332578b65efa4ebd87a23fa64f0a771eb942a026980badc4346b97f9"}
Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.249137 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7a98db7332578b65efa4ebd87a23fa64f0a771eb942a026980badc4346b97f9"
Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.248799 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt"
Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.344731 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw"]
Feb 02 13:32:23 crc kubenswrapper[4955]: E0202 13:32:23.345120 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b67e94-3e12-478a-8691-2768084b6229" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.345136 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b67e94-3e12-478a-8691-2768084b6229" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.345313 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b67e94-3e12-478a-8691-2768084b6229" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.348268 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.348605 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.348797 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.348957 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.354395 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.356663 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw"] Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.424461 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.424612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.424735 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.424802 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vzc8\" (UniqueName: \"kubernetes.io/projected/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-kube-api-access-6vzc8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.424841 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.527041 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.527154 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.527200 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vzc8\" (UniqueName: \"kubernetes.io/projected/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-kube-api-access-6vzc8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.527229 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.527277 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.528434 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.532611 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.532614 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.533404 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.548026 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vzc8\" (UniqueName: \"kubernetes.io/projected/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-kube-api-access-6vzc8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wk7cw\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.662350 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:32:23 crc kubenswrapper[4955]: I0202 13:32:23.716653 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:32:23 crc kubenswrapper[4955]: E0202 13:32:23.716892 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:32:24 crc kubenswrapper[4955]: I0202 13:32:24.185123 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw"] Feb 02 13:32:24 crc kubenswrapper[4955]: I0202 13:32:24.186040 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:32:24 crc kubenswrapper[4955]: I0202 13:32:24.258760 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" event={"ID":"0a7ecd9e-7038-4eda-ae0b-833ecf729f48","Type":"ContainerStarted","Data":"3c84f4cfeaa8e979e168c3d77dfb20e73d915605cfc94248198f1316391bef93"} Feb 02 13:32:25 crc kubenswrapper[4955]: I0202 13:32:25.267948 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" event={"ID":"0a7ecd9e-7038-4eda-ae0b-833ecf729f48","Type":"ContainerStarted","Data":"1957c222fa5b35da1ca6aad5213c90af4cdcdbe46a28d809504283f56618ed5e"} Feb 02 13:32:25 crc kubenswrapper[4955]: I0202 13:32:25.289696 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" podStartSLOduration=1.877686419 podStartE2EDuration="2.289676798s" podCreationTimestamp="2026-02-02 13:32:23 +0000 UTC" firstStartedPulling="2026-02-02 13:32:24.185794413 +0000 UTC m=+1795.098130863" lastFinishedPulling="2026-02-02 13:32:24.597784802 +0000 UTC m=+1795.510121242" observedRunningTime="2026-02-02 13:32:25.281011254 +0000 UTC m=+1796.193347704" watchObservedRunningTime="2026-02-02 13:32:25.289676798 +0000 UTC m=+1796.202013248" Feb 02 13:32:35 crc kubenswrapper[4955]: I0202 13:32:35.716440 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:32:36 crc kubenswrapper[4955]: I0202 13:32:36.365235 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"c539116cb659020849c2a4ade13e1e71d2570f2495d5509f8008d67246b924b4"} Feb 02 13:32:39 crc kubenswrapper[4955]: I0202 13:32:39.911741 4955 scope.go:117] "RemoveContainer" containerID="a2aa9c2192728b816dad541badfc5174cb614eddd0e4bf91c9c548ea7db7a2b3" Feb 02 13:32:39 crc kubenswrapper[4955]: I0202 13:32:39.965183 4955 scope.go:117] "RemoveContainer" containerID="f2ecabd0323a48a97418b5e95039ea3e8010eb1ff6b95df11e39612af5422926" Feb 02 13:33:21 crc kubenswrapper[4955]: I0202 13:33:21.747402 4955 generic.go:334] "Generic (PLEG): container finished" podID="0a7ecd9e-7038-4eda-ae0b-833ecf729f48" containerID="1957c222fa5b35da1ca6aad5213c90af4cdcdbe46a28d809504283f56618ed5e" exitCode=0 Feb 02 13:33:21 crc kubenswrapper[4955]: I0202 13:33:21.747625 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" event={"ID":"0a7ecd9e-7038-4eda-ae0b-833ecf729f48","Type":"ContainerDied","Data":"1957c222fa5b35da1ca6aad5213c90af4cdcdbe46a28d809504283f56618ed5e"} Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.191435 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.391826 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ssh-key-openstack-edpm-ipam\") pod \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.391918 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vzc8\" (UniqueName: \"kubernetes.io/projected/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-kube-api-access-6vzc8\") pod \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.392142 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-inventory\") pod \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.392256 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovncontroller-config-0\") pod \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.392298 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovn-combined-ca-bundle\") pod \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\" (UID: \"0a7ecd9e-7038-4eda-ae0b-833ecf729f48\") " Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.399113 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-kube-api-access-6vzc8" (OuterVolumeSpecName: "kube-api-access-6vzc8") pod "0a7ecd9e-7038-4eda-ae0b-833ecf729f48" (UID: "0a7ecd9e-7038-4eda-ae0b-833ecf729f48"). 
InnerVolumeSpecName "kube-api-access-6vzc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.399183 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0a7ecd9e-7038-4eda-ae0b-833ecf729f48" (UID: "0a7ecd9e-7038-4eda-ae0b-833ecf729f48"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.418350 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0a7ecd9e-7038-4eda-ae0b-833ecf729f48" (UID: "0a7ecd9e-7038-4eda-ae0b-833ecf729f48"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.419170 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-inventory" (OuterVolumeSpecName: "inventory") pod "0a7ecd9e-7038-4eda-ae0b-833ecf729f48" (UID: "0a7ecd9e-7038-4eda-ae0b-833ecf729f48"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.421404 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a7ecd9e-7038-4eda-ae0b-833ecf729f48" (UID: "0a7ecd9e-7038-4eda-ae0b-833ecf729f48"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.494453 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.494751 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vzc8\" (UniqueName: \"kubernetes.io/projected/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-kube-api-access-6vzc8\") on node \"crc\" DevicePath \"\"" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.494761 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.494770 4955 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.494782 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7ecd9e-7038-4eda-ae0b-833ecf729f48-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.764893 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" event={"ID":"0a7ecd9e-7038-4eda-ae0b-833ecf729f48","Type":"ContainerDied","Data":"3c84f4cfeaa8e979e168c3d77dfb20e73d915605cfc94248198f1316391bef93"} Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.764933 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c84f4cfeaa8e979e168c3d77dfb20e73d915605cfc94248198f1316391bef93" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.764994 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wk7cw" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.864512 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w"] Feb 02 13:33:23 crc kubenswrapper[4955]: E0202 13:33:23.865012 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7ecd9e-7038-4eda-ae0b-833ecf729f48" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.865040 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7ecd9e-7038-4eda-ae0b-833ecf729f48" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.865333 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7ecd9e-7038-4eda-ae0b-833ecf729f48" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.866448 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.869093 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.869166 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.869271 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.870604 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.870932 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.874863 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 13:33:23 crc kubenswrapper[4955]: I0202 13:33:23.876651 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w"] Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.002867 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.002955 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.003510 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.003677 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2ml6\" (UniqueName: \"kubernetes.io/projected/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-kube-api-access-t2ml6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.003724 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.003781 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.104674 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.104739 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.104810 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.104842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.104868 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.104925 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2ml6\" (UniqueName: \"kubernetes.io/projected/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-kube-api-access-t2ml6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.108641 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.108750 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.108938 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.109984 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.111518 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.128821 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2ml6\" (UniqueName: \"kubernetes.io/projected/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-kube-api-access-t2ml6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.181413 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.711432 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w"] Feb 02 13:33:24 crc kubenswrapper[4955]: I0202 13:33:24.774496 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" event={"ID":"d5685fb2-ca69-402f-baee-9fb7b6ba6dba","Type":"ContainerStarted","Data":"38cc3385cb87988c6ad2bd4a562d07ea3af91e8cdb3beaac9947b18222152213"} Feb 02 13:33:25 crc kubenswrapper[4955]: I0202 13:33:25.786895 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" event={"ID":"d5685fb2-ca69-402f-baee-9fb7b6ba6dba","Type":"ContainerStarted","Data":"f8bee584d3d0af060f21a07025458e4f793a0103eb6daba5bb78c2308e2e466f"} Feb 02 13:33:25 crc kubenswrapper[4955]: I0202 13:33:25.805744 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" podStartSLOduration=2.395617757 podStartE2EDuration="2.805724309s" podCreationTimestamp="2026-02-02 13:33:23 +0000 UTC" firstStartedPulling="2026-02-02 13:33:24.717125294 +0000 UTC m=+1855.629461744" lastFinishedPulling="2026-02-02 13:33:25.127231846 +0000 UTC m=+1856.039568296" observedRunningTime="2026-02-02 13:33:25.800475669 +0000 UTC m=+1856.712812119" watchObservedRunningTime="2026-02-02 13:33:25.805724309 +0000 UTC m=+1856.718060759" Feb 02 13:34:10 crc kubenswrapper[4955]: I0202 13:34:10.148079 4955 generic.go:334] "Generic (PLEG): container finished" podID="d5685fb2-ca69-402f-baee-9fb7b6ba6dba" containerID="f8bee584d3d0af060f21a07025458e4f793a0103eb6daba5bb78c2308e2e466f" exitCode=0 Feb 02 13:34:10 crc kubenswrapper[4955]: I0202 13:34:10.148116 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" event={"ID":"d5685fb2-ca69-402f-baee-9fb7b6ba6dba","Type":"ContainerDied","Data":"f8bee584d3d0af060f21a07025458e4f793a0103eb6daba5bb78c2308e2e466f"} Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.593870 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.656268 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2ml6\" (UniqueName: \"kubernetes.io/projected/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-kube-api-access-t2ml6\") pod \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.656323 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-nova-metadata-neutron-config-0\") pod \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.656359 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-ssh-key-openstack-edpm-ipam\") pod \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.656407 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.656438 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-inventory\") pod \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.656607 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-metadata-combined-ca-bundle\") pod \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\" (UID: \"d5685fb2-ca69-402f-baee-9fb7b6ba6dba\") " Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.663266 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-kube-api-access-t2ml6" (OuterVolumeSpecName: "kube-api-access-t2ml6") pod "d5685fb2-ca69-402f-baee-9fb7b6ba6dba" (UID: "d5685fb2-ca69-402f-baee-9fb7b6ba6dba"). InnerVolumeSpecName "kube-api-access-t2ml6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.671874 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d5685fb2-ca69-402f-baee-9fb7b6ba6dba" (UID: "d5685fb2-ca69-402f-baee-9fb7b6ba6dba"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.684711 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d5685fb2-ca69-402f-baee-9fb7b6ba6dba" (UID: "d5685fb2-ca69-402f-baee-9fb7b6ba6dba"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.685588 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-inventory" (OuterVolumeSpecName: "inventory") pod "d5685fb2-ca69-402f-baee-9fb7b6ba6dba" (UID: "d5685fb2-ca69-402f-baee-9fb7b6ba6dba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.693253 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d5685fb2-ca69-402f-baee-9fb7b6ba6dba" (UID: "d5685fb2-ca69-402f-baee-9fb7b6ba6dba"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.704493 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d5685fb2-ca69-402f-baee-9fb7b6ba6dba" (UID: "d5685fb2-ca69-402f-baee-9fb7b6ba6dba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.759402 4955 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.759592 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2ml6\" (UniqueName: \"kubernetes.io/projected/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-kube-api-access-t2ml6\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.759684 4955 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.759774 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.759906 4955 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:11 crc kubenswrapper[4955]: I0202 13:34:11.759996 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5685fb2-ca69-402f-baee-9fb7b6ba6dba-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.166402 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" event={"ID":"d5685fb2-ca69-402f-baee-9fb7b6ba6dba","Type":"ContainerDied","Data":"38cc3385cb87988c6ad2bd4a562d07ea3af91e8cdb3beaac9947b18222152213"} Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.166708 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38cc3385cb87988c6ad2bd4a562d07ea3af91e8cdb3beaac9947b18222152213" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.166446 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.262905 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg"] Feb 02 13:34:12 crc kubenswrapper[4955]: E0202 13:34:12.263534 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5685fb2-ca69-402f-baee-9fb7b6ba6dba" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.263553 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5685fb2-ca69-402f-baee-9fb7b6ba6dba" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.263825 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5685fb2-ca69-402f-baee-9fb7b6ba6dba" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.264461 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.267254 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.267692 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.267886 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.268850 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.272701 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.279587 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg"] Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.372404 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.372533 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.372678 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.372741 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5d9\" (UniqueName: \"kubernetes.io/projected/85d18e13-5f42-4f3a-841c-2e900264b6a1-kube-api-access-9f5d9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.372827 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.474493 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.474623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.474658 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.474738 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.474763 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5d9\" (UniqueName: \"kubernetes.io/projected/85d18e13-5f42-4f3a-841c-2e900264b6a1-kube-api-access-9f5d9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.480800 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: 
\"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.480908 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.481081 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.482709 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.495378 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f5d9\" (UniqueName: \"kubernetes.io/projected/85d18e13-5f42-4f3a-841c-2e900264b6a1-kube-api-access-9f5d9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:12 crc kubenswrapper[4955]: I0202 13:34:12.588526 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:34:13 crc kubenswrapper[4955]: I0202 13:34:13.112579 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg"] Feb 02 13:34:13 crc kubenswrapper[4955]: I0202 13:34:13.175345 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" event={"ID":"85d18e13-5f42-4f3a-841c-2e900264b6a1","Type":"ContainerStarted","Data":"ae25e248dd2a89a74385a701e1efc8a8e64a56b56268137305150a464b9ab79e"} Feb 02 13:34:14 crc kubenswrapper[4955]: I0202 13:34:14.184472 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" event={"ID":"85d18e13-5f42-4f3a-841c-2e900264b6a1","Type":"ContainerStarted","Data":"5128edb50061f5ce479367e6bef27a0e6d84972bd63b41b5cf88b7c49fc94adc"} Feb 02 13:34:14 crc kubenswrapper[4955]: I0202 13:34:14.201824 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" podStartSLOduration=1.737156452 podStartE2EDuration="2.201800543s" podCreationTimestamp="2026-02-02 13:34:12 +0000 UTC" firstStartedPulling="2026-02-02 13:34:13.118875188 +0000 UTC m=+1904.031211638" lastFinishedPulling="2026-02-02 13:34:13.583519279 +0000 UTC m=+1904.495855729" observedRunningTime="2026-02-02 13:34:14.197427974 +0000 UTC m=+1905.109764424" watchObservedRunningTime="2026-02-02 13:34:14.201800543 +0000 UTC m=+1905.114136993" Feb 02 13:35:03 crc kubenswrapper[4955]: I0202 13:35:03.016937 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:35:03 crc kubenswrapper[4955]: I0202 13:35:03.017438 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:35:33 crc kubenswrapper[4955]: I0202 13:35:33.016652 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:35:33 crc kubenswrapper[4955]: I0202 13:35:33.017069 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:36:03 crc kubenswrapper[4955]: I0202 13:36:03.016509 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:36:03 crc kubenswrapper[4955]: I0202 13:36:03.017033 4955 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:36:03 crc kubenswrapper[4955]: I0202 13:36:03.017085 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:36:03 crc kubenswrapper[4955]: I0202 13:36:03.017786 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c539116cb659020849c2a4ade13e1e71d2570f2495d5509f8008d67246b924b4"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:36:03 crc kubenswrapper[4955]: I0202 13:36:03.017946 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://c539116cb659020849c2a4ade13e1e71d2570f2495d5509f8008d67246b924b4" gracePeriod=600 Feb 02 13:36:04 crc kubenswrapper[4955]: I0202 13:36:04.083474 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="c539116cb659020849c2a4ade13e1e71d2570f2495d5509f8008d67246b924b4" exitCode=0 Feb 02 13:36:04 crc kubenswrapper[4955]: I0202 13:36:04.083550 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"c539116cb659020849c2a4ade13e1e71d2570f2495d5509f8008d67246b924b4"} Feb 02 13:36:04 crc kubenswrapper[4955]: I0202 13:36:04.083845 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9"} Feb 02 13:36:04 crc kubenswrapper[4955]: I0202 13:36:04.083864 4955 scope.go:117] "RemoveContainer" containerID="78ed1d9d6132d2117433a49f3e49330acb447d5f7b0f18b13f346065c8c12775" Feb 02 13:37:49 crc kubenswrapper[4955]: I0202 13:37:49.952152 4955 generic.go:334] "Generic (PLEG): container finished" podID="85d18e13-5f42-4f3a-841c-2e900264b6a1" containerID="5128edb50061f5ce479367e6bef27a0e6d84972bd63b41b5cf88b7c49fc94adc" exitCode=0 Feb 02 13:37:49 crc kubenswrapper[4955]: I0202 13:37:49.952384 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" event={"ID":"85d18e13-5f42-4f3a-841c-2e900264b6a1","Type":"ContainerDied","Data":"5128edb50061f5ce479367e6bef27a0e6d84972bd63b41b5cf88b7c49fc94adc"} Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.344491 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.487279 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-inventory\") pod \"85d18e13-5f42-4f3a-841c-2e900264b6a1\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.487330 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5d9\" (UniqueName: \"kubernetes.io/projected/85d18e13-5f42-4f3a-841c-2e900264b6a1-kube-api-access-9f5d9\") pod \"85d18e13-5f42-4f3a-841c-2e900264b6a1\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.487402 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-ssh-key-openstack-edpm-ipam\") pod \"85d18e13-5f42-4f3a-841c-2e900264b6a1\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.487428 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-secret-0\") pod \"85d18e13-5f42-4f3a-841c-2e900264b6a1\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.487533 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-combined-ca-bundle\") pod \"85d18e13-5f42-4f3a-841c-2e900264b6a1\" (UID: \"85d18e13-5f42-4f3a-841c-2e900264b6a1\") " Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.492860 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d18e13-5f42-4f3a-841c-2e900264b6a1-kube-api-access-9f5d9" (OuterVolumeSpecName: "kube-api-access-9f5d9") pod "85d18e13-5f42-4f3a-841c-2e900264b6a1" (UID: "85d18e13-5f42-4f3a-841c-2e900264b6a1"). InnerVolumeSpecName "kube-api-access-9f5d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.492831 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "85d18e13-5f42-4f3a-841c-2e900264b6a1" (UID: "85d18e13-5f42-4f3a-841c-2e900264b6a1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.517821 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85d18e13-5f42-4f3a-841c-2e900264b6a1" (UID: "85d18e13-5f42-4f3a-841c-2e900264b6a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.521751 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-inventory" (OuterVolumeSpecName: "inventory") pod "85d18e13-5f42-4f3a-841c-2e900264b6a1" (UID: "85d18e13-5f42-4f3a-841c-2e900264b6a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.529538 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "85d18e13-5f42-4f3a-841c-2e900264b6a1" (UID: "85d18e13-5f42-4f3a-841c-2e900264b6a1"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.589925 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.589961 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5d9\" (UniqueName: \"kubernetes.io/projected/85d18e13-5f42-4f3a-841c-2e900264b6a1-kube-api-access-9f5d9\") on node \"crc\" DevicePath \"\"" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.589973 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.589983 4955 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.589993 4955 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d18e13-5f42-4f3a-841c-2e900264b6a1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.970783 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" event={"ID":"85d18e13-5f42-4f3a-841c-2e900264b6a1","Type":"ContainerDied","Data":"ae25e248dd2a89a74385a701e1efc8a8e64a56b56268137305150a464b9ab79e"} Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.971087 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae25e248dd2a89a74385a701e1efc8a8e64a56b56268137305150a464b9ab79e" Feb 02 13:37:51 crc kubenswrapper[4955]: I0202 13:37:51.971143 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.073069 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98"] Feb 02 13:37:52 crc kubenswrapper[4955]: E0202 13:37:52.073486 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d18e13-5f42-4f3a-841c-2e900264b6a1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.073503 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d18e13-5f42-4f3a-841c-2e900264b6a1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.073756 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d18e13-5f42-4f3a-841c-2e900264b6a1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.074343 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.077612 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.077661 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.077661 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.078148 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.078259 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.078709 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.079269 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.099790 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98"] Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.099952 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100053 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxn2\" (UniqueName: \"kubernetes.io/projected/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-kube-api-access-gwxn2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100164 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100194 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100308 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100399 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100461 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100532 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.100617 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205674 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205722 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205769 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205809 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205840 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205876 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205901 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205930 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.205973 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwxn2\" (UniqueName: \"kubernetes.io/projected/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-kube-api-access-gwxn2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.210414 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.228620 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.231215 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.244171 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.249540 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwxn2\" (UniqueName: \"kubernetes.io/projected/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-kube-api-access-gwxn2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.250408 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.250930 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.256408 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: 
I0202 13:37:52.274866 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjd98\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.390792 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.976009 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98"] Feb 02 13:37:52 crc kubenswrapper[4955]: I0202 13:37:52.989285 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:37:53 crc kubenswrapper[4955]: I0202 13:37:53.984900 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" event={"ID":"254c47a9-edbb-46bb-8c4a-72395aa8f8b0","Type":"ContainerStarted","Data":"1e38efef6851786dd9c57ed2c0f67ed7d0f5a7cf9c12c0450f76038b3e1848e9"} Feb 02 13:37:53 crc kubenswrapper[4955]: I0202 13:37:53.985403 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" event={"ID":"254c47a9-edbb-46bb-8c4a-72395aa8f8b0","Type":"ContainerStarted","Data":"604242c3d8f98387c379696a7c8311c10dd29cfac2902c8cc7873a952fd7304f"} Feb 02 13:37:54 crc kubenswrapper[4955]: I0202 13:37:54.008951 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" podStartSLOduration=1.489222716 podStartE2EDuration="2.008931355s" podCreationTimestamp="2026-02-02 13:37:52 +0000 UTC" firstStartedPulling="2026-02-02 13:37:52.989027184 +0000 UTC m=+2123.901363634" lastFinishedPulling="2026-02-02 13:37:53.508735823 +0000 UTC m=+2124.421072273" observedRunningTime="2026-02-02 13:37:54.00058497 +0000 UTC m=+2124.912921430" watchObservedRunningTime="2026-02-02 13:37:54.008931355 +0000 UTC m=+2124.921267805" Feb 02 13:38:03 crc kubenswrapper[4955]: I0202 13:38:03.016609 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:38:03 crc kubenswrapper[4955]: I0202 13:38:03.017039 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:38:33 crc kubenswrapper[4955]: I0202 13:38:33.017326 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:38:33 crc kubenswrapper[4955]: I0202 13:38:33.017855 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" 
podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.012845 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pb628"] Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.015863 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.029918 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pb628"] Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.101300 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-catalog-content\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.101370 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-utilities\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.101502 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmqqg\" (UniqueName: \"kubernetes.io/projected/012fd900-12b2-4b34-8eb9-2860b0884b70-kube-api-access-dmqqg\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.203350 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqqg\" (UniqueName: \"kubernetes.io/projected/012fd900-12b2-4b34-8eb9-2860b0884b70-kube-api-access-dmqqg\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.203496 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-catalog-content\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.203525 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-utilities\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.204155 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-utilities\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 
13:38:42.204189 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-catalog-content\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.240241 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmqqg\" (UniqueName: \"kubernetes.io/projected/012fd900-12b2-4b34-8eb9-2860b0884b70-kube-api-access-dmqqg\") pod \"redhat-operators-pb628\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.337075 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:42 crc kubenswrapper[4955]: I0202 13:38:42.816120 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pb628"] Feb 02 13:38:43 crc kubenswrapper[4955]: I0202 13:38:43.511282 4955 generic.go:334] "Generic (PLEG): container finished" podID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerID="e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773" exitCode=0 Feb 02 13:38:43 crc kubenswrapper[4955]: I0202 13:38:43.511337 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerDied","Data":"e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773"} Feb 02 13:38:43 crc kubenswrapper[4955]: I0202 13:38:43.511640 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerStarted","Data":"959e7cc83cf18b0b17044c9597d9c74efd44dfcb0cdc792b5a15045152b2d1ee"} Feb 02 13:38:45 crc kubenswrapper[4955]: I0202 13:38:45.542608 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerStarted","Data":"90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391"} Feb 02 13:38:46 crc kubenswrapper[4955]: I0202 13:38:46.552396 4955 generic.go:334] "Generic (PLEG): container finished" podID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerID="90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391" exitCode=0 Feb 02 13:38:46 crc kubenswrapper[4955]: I0202 13:38:46.552496 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerDied","Data":"90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391"} Feb 02 13:38:48 crc kubenswrapper[4955]: I0202 13:38:48.577869 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerStarted","Data":"c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212"} Feb 02 13:38:48 crc kubenswrapper[4955]: I0202 13:38:48.604937 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pb628" podStartSLOduration=4.007734942 podStartE2EDuration="7.604914038s" podCreationTimestamp="2026-02-02 13:38:41 +0000 UTC" firstStartedPulling="2026-02-02 
13:38:43.513645577 +0000 UTC m=+2174.425982027" lastFinishedPulling="2026-02-02 13:38:47.110824673 +0000 UTC m=+2178.023161123" observedRunningTime="2026-02-02 13:38:48.596302167 +0000 UTC m=+2179.508638627" watchObservedRunningTime="2026-02-02 13:38:48.604914038 +0000 UTC m=+2179.517250488" Feb 02 13:38:52 crc kubenswrapper[4955]: I0202 13:38:52.337436 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:52 crc kubenswrapper[4955]: I0202 13:38:52.338061 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:38:53 crc kubenswrapper[4955]: I0202 13:38:53.384197 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pb628" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="registry-server" probeResult="failure" output=< Feb 02 13:38:53 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 13:38:53 crc kubenswrapper[4955]: > Feb 02 13:39:02 crc kubenswrapper[4955]: I0202 13:39:02.395863 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:39:02 crc kubenswrapper[4955]: I0202 13:39:02.447904 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:39:02 crc kubenswrapper[4955]: I0202 13:39:02.635403 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pb628"] Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.017258 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.019578 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.019640 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.020528 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.020618 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" gracePeriod=600 Feb 02 13:39:03 crc kubenswrapper[4955]: E0202 13:39:03.146874 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.694757 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" exitCode=0 Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.694822 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9"} Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.695158 4955 scope.go:117] "RemoveContainer" containerID="c539116cb659020849c2a4ade13e1e71d2570f2495d5509f8008d67246b924b4" Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.695311 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pb628" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="registry-server" containerID="cri-o://c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212" gracePeriod=2 Feb 02 13:39:03 crc kubenswrapper[4955]: I0202 13:39:03.696252 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:39:03 crc kubenswrapper[4955]: E0202 13:39:03.696599 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.127021 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.199682 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-catalog-content\") pod \"012fd900-12b2-4b34-8eb9-2860b0884b70\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.199856 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-utilities\") pod \"012fd900-12b2-4b34-8eb9-2860b0884b70\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.201017 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-utilities" (OuterVolumeSpecName: "utilities") pod "012fd900-12b2-4b34-8eb9-2860b0884b70" (UID: "012fd900-12b2-4b34-8eb9-2860b0884b70"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.302211 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmqqg\" (UniqueName: \"kubernetes.io/projected/012fd900-12b2-4b34-8eb9-2860b0884b70-kube-api-access-dmqqg\") pod \"012fd900-12b2-4b34-8eb9-2860b0884b70\" (UID: \"012fd900-12b2-4b34-8eb9-2860b0884b70\") " Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.303342 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.308068 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012fd900-12b2-4b34-8eb9-2860b0884b70-kube-api-access-dmqqg" (OuterVolumeSpecName: "kube-api-access-dmqqg") pod "012fd900-12b2-4b34-8eb9-2860b0884b70" (UID: "012fd900-12b2-4b34-8eb9-2860b0884b70"). InnerVolumeSpecName "kube-api-access-dmqqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.317412 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "012fd900-12b2-4b34-8eb9-2860b0884b70" (UID: "012fd900-12b2-4b34-8eb9-2860b0884b70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.406173 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012fd900-12b2-4b34-8eb9-2860b0884b70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.406228 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmqqg\" (UniqueName: \"kubernetes.io/projected/012fd900-12b2-4b34-8eb9-2860b0884b70-kube-api-access-dmqqg\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.705290 4955 generic.go:334] "Generic (PLEG): container finished" podID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerID="c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212" exitCode=0 Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.705353 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pb628" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.705367 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerDied","Data":"c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212"} Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.705400 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb628" event={"ID":"012fd900-12b2-4b34-8eb9-2860b0884b70","Type":"ContainerDied","Data":"959e7cc83cf18b0b17044c9597d9c74efd44dfcb0cdc792b5a15045152b2d1ee"} Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.705418 4955 scope.go:117] "RemoveContainer" containerID="c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.726096 4955 scope.go:117] "RemoveContainer" containerID="90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.740114 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pb628"] Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.749194 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pb628"] Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.767900 4955 scope.go:117] "RemoveContainer" containerID="e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.797902 4955 scope.go:117] "RemoveContainer" containerID="c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212" Feb 02 13:39:04 crc kubenswrapper[4955]: E0202 13:39:04.798427 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212\": container with ID starting with c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212 not found: ID does not exist" containerID="c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.798472 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212"} err="failed to get container status \"c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212\": rpc error: code = NotFound desc = could not find container \"c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212\": container with ID starting with c2204cd29c348f28a8d7f9ddbfe8277009f5966ae403c6b10c05f9b765ddc212 not found: ID does not exist" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.798500 4955 scope.go:117] "RemoveContainer" containerID="90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391" Feb 02 13:39:04 crc kubenswrapper[4955]: E0202 13:39:04.799022 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391\": container with ID starting with 90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391 not found: ID does not exist" containerID="90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.799055 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391"} err="failed to get container status \"90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391\": rpc error: code = NotFound desc = could not find container \"90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391\": container with ID starting with 90ef42ba8c1d20af0f1750bbb4a557bb33a74afb198518941e776fb925497391 not found: ID does not exist" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.799072 4955 scope.go:117] "RemoveContainer" containerID="e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773" Feb 02 13:39:04 crc kubenswrapper[4955]: E0202 13:39:04.799338 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773\": container with ID starting with e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773 not found: ID does not exist" containerID="e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773" Feb 02 13:39:04 crc kubenswrapper[4955]: I0202 13:39:04.799370 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773"} err="failed to get container status \"e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773\": rpc error: code = NotFound desc = could not find container \"e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773\": container with ID starting with e94556fe937892ac36029fa90940d6638a417834aa4192106649da03b8b58773 not found: ID does not exist" Feb 02 13:39:05 crc kubenswrapper[4955]: I0202 13:39:05.727264 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" path="/var/lib/kubelet/pods/012fd900-12b2-4b34-8eb9-2860b0884b70/volumes" Feb 02 13:39:16 crc kubenswrapper[4955]: I0202 13:39:16.717229 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:39:16 crc kubenswrapper[4955]: E0202 13:39:16.718022 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.884953 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d5hqg"] Feb 02 13:39:23 crc kubenswrapper[4955]: E0202 13:39:23.887286 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="extract-content" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.887337 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="extract-content" Feb 02 13:39:23 crc kubenswrapper[4955]: E0202 13:39:23.887384 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="extract-utilities" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.887504 4955 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="extract-utilities" Feb 02 13:39:23 crc kubenswrapper[4955]: E0202 13:39:23.887527 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="registry-server" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.887537 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="registry-server" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.888040 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="012fd900-12b2-4b34-8eb9-2860b0884b70" containerName="registry-server" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.898941 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.913723 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5hqg"] Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.980696 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwcp\" (UniqueName: \"kubernetes.io/projected/452e60e3-6e31-4a43-9b1f-a79af3c33099-kube-api-access-2rwcp\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.980971 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-catalog-content\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:23 crc kubenswrapper[4955]: I0202 13:39:23.981090 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-utilities\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.083021 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-utilities\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.083481 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwcp\" (UniqueName: \"kubernetes.io/projected/452e60e3-6e31-4a43-9b1f-a79af3c33099-kube-api-access-2rwcp\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.083701 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-catalog-content\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 
13:39:24.084309 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-catalog-content\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.084673 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-utilities\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.111306 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwcp\" (UniqueName: \"kubernetes.io/projected/452e60e3-6e31-4a43-9b1f-a79af3c33099-kube-api-access-2rwcp\") pod \"certified-operators-d5hqg\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.223690 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.766104 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5hqg"] Feb 02 13:39:24 crc kubenswrapper[4955]: W0202 13:39:24.778685 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452e60e3_6e31_4a43_9b1f_a79af3c33099.slice/crio-ed6925990322ed082b454a2130258dbd895a653e772eb64993c5aff30129e5bb WatchSource:0}: Error finding container ed6925990322ed082b454a2130258dbd895a653e772eb64993c5aff30129e5bb: Status 404 returned error can't find the container with id ed6925990322ed082b454a2130258dbd895a653e772eb64993c5aff30129e5bb Feb 02 13:39:24 crc kubenswrapper[4955]: I0202 13:39:24.867023 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerStarted","Data":"ed6925990322ed082b454a2130258dbd895a653e772eb64993c5aff30129e5bb"} Feb 02 13:39:25 crc kubenswrapper[4955]: I0202 13:39:25.883349 4955 generic.go:334] "Generic (PLEG): container finished" podID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerID="0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2" exitCode=0 Feb 02 13:39:25 crc kubenswrapper[4955]: I0202 13:39:25.883421 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerDied","Data":"0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2"} Feb 02 13:39:26 crc kubenswrapper[4955]: I0202 13:39:26.893112 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerStarted","Data":"b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44"} Feb 02 13:39:27 crc kubenswrapper[4955]: I0202 13:39:27.903758 4955 generic.go:334] "Generic (PLEG): container finished" podID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerID="b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44" exitCode=0 Feb 02 13:39:27 crc kubenswrapper[4955]: I0202 
13:39:27.903831 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerDied","Data":"b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44"} Feb 02 13:39:28 crc kubenswrapper[4955]: I0202 13:39:28.914814 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerStarted","Data":"2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f"} Feb 02 13:39:28 crc kubenswrapper[4955]: I0202 13:39:28.940406 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d5hqg" podStartSLOduration=3.221251872 podStartE2EDuration="5.940381702s" podCreationTimestamp="2026-02-02 13:39:23 +0000 UTC" firstStartedPulling="2026-02-02 13:39:25.886026071 +0000 UTC m=+2216.798362521" lastFinishedPulling="2026-02-02 13:39:28.605155901 +0000 UTC m=+2219.517492351" observedRunningTime="2026-02-02 13:39:28.930766376 +0000 UTC m=+2219.843102826" watchObservedRunningTime="2026-02-02 13:39:28.940381702 +0000 UTC m=+2219.852718152" Feb 02 13:39:29 crc kubenswrapper[4955]: I0202 13:39:29.723983 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:39:29 crc kubenswrapper[4955]: E0202 13:39:29.724592 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.053511 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d9hwb"] Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.055635 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.114603 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-catalog-content\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.114678 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmkh\" (UniqueName: \"kubernetes.io/projected/57aa5139-68f0-4aee-954a-eafc5eb4d136-kube-api-access-bkmkh\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.114729 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-utilities\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.114910 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hwb"] Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.218760 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-catalog-content\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.219092 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmkh\" (UniqueName: \"kubernetes.io/projected/57aa5139-68f0-4aee-954a-eafc5eb4d136-kube-api-access-bkmkh\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.219149 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-utilities\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.219655 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-utilities\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.219871 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-catalog-content\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.251013 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bkmkh\" (UniqueName: \"kubernetes.io/projected/57aa5139-68f0-4aee-954a-eafc5eb4d136-kube-api-access-bkmkh\") pod \"community-operators-d9hwb\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.392838 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:31 crc kubenswrapper[4955]: I0202 13:39:31.932728 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hwb"] Feb 02 13:39:32 crc kubenswrapper[4955]: I0202 13:39:32.946969 4955 generic.go:334] "Generic (PLEG): container finished" podID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerID="964df5bbe776e160da7de25d71ff9a08e5ee9ed052cfc8f8391ff7148f35e857" exitCode=0 Feb 02 13:39:32 crc kubenswrapper[4955]: I0202 13:39:32.947063 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hwb" event={"ID":"57aa5139-68f0-4aee-954a-eafc5eb4d136","Type":"ContainerDied","Data":"964df5bbe776e160da7de25d71ff9a08e5ee9ed052cfc8f8391ff7148f35e857"} Feb 02 13:39:32 crc kubenswrapper[4955]: I0202 13:39:32.947300 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hwb" event={"ID":"57aa5139-68f0-4aee-954a-eafc5eb4d136","Type":"ContainerStarted","Data":"04b437d57f50a07b86700df76f051c12c12bf7ecd197f4302ed239d99a4f53ea"} Feb 02 13:39:33 crc kubenswrapper[4955]: I0202 13:39:33.956841 4955 generic.go:334] "Generic (PLEG): container finished" podID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerID="a0a1ef521fa4dd2273048ee5c2ebb5472dc6134d0fea744beb9b5abd110ad023" exitCode=0 Feb 02 13:39:33 crc kubenswrapper[4955]: I0202 13:39:33.957034 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hwb" event={"ID":"57aa5139-68f0-4aee-954a-eafc5eb4d136","Type":"ContainerDied","Data":"a0a1ef521fa4dd2273048ee5c2ebb5472dc6134d0fea744beb9b5abd110ad023"} Feb 02 13:39:34 crc kubenswrapper[4955]: I0202 13:39:34.223997 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:34 crc kubenswrapper[4955]: I0202 13:39:34.224286 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:34 crc kubenswrapper[4955]: I0202 13:39:34.270941 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:34 crc kubenswrapper[4955]: I0202 13:39:34.966032 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hwb" event={"ID":"57aa5139-68f0-4aee-954a-eafc5eb4d136","Type":"ContainerStarted","Data":"d0e514c2c9bb24385862f9ddabe9140245c276c3ba9d74baf825a12adbb004ab"} Feb 02 13:39:34 crc kubenswrapper[4955]: I0202 13:39:34.988522 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d9hwb" podStartSLOduration=2.50979847 podStartE2EDuration="3.988504038s" podCreationTimestamp="2026-02-02 13:39:31 +0000 UTC" firstStartedPulling="2026-02-02 13:39:32.948902532 +0000 UTC m=+2223.861238982" lastFinishedPulling="2026-02-02 13:39:34.4276081 +0000 UTC m=+2225.339944550" observedRunningTime="2026-02-02 
13:39:34.982024989 +0000 UTC m=+2225.894361469" watchObservedRunningTime="2026-02-02 13:39:34.988504038 +0000 UTC m=+2225.900840488" Feb 02 13:39:35 crc kubenswrapper[4955]: I0202 13:39:35.013738 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:36 crc kubenswrapper[4955]: I0202 13:39:36.653455 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5hqg"] Feb 02 13:39:36 crc kubenswrapper[4955]: I0202 13:39:36.980482 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d5hqg" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="registry-server" containerID="cri-o://2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f" gracePeriod=2 Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.409968 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.563280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-utilities\") pod \"452e60e3-6e31-4a43-9b1f-a79af3c33099\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.563523 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-catalog-content\") pod \"452e60e3-6e31-4a43-9b1f-a79af3c33099\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.563575 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rwcp\" (UniqueName: \"kubernetes.io/projected/452e60e3-6e31-4a43-9b1f-a79af3c33099-kube-api-access-2rwcp\") pod \"452e60e3-6e31-4a43-9b1f-a79af3c33099\" (UID: \"452e60e3-6e31-4a43-9b1f-a79af3c33099\") " Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.565163 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-utilities" (OuterVolumeSpecName: "utilities") pod "452e60e3-6e31-4a43-9b1f-a79af3c33099" (UID: "452e60e3-6e31-4a43-9b1f-a79af3c33099"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.571370 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452e60e3-6e31-4a43-9b1f-a79af3c33099-kube-api-access-2rwcp" (OuterVolumeSpecName: "kube-api-access-2rwcp") pod "452e60e3-6e31-4a43-9b1f-a79af3c33099" (UID: "452e60e3-6e31-4a43-9b1f-a79af3c33099"). InnerVolumeSpecName "kube-api-access-2rwcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.616148 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "452e60e3-6e31-4a43-9b1f-a79af3c33099" (UID: "452e60e3-6e31-4a43-9b1f-a79af3c33099"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.665706 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.665739 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rwcp\" (UniqueName: \"kubernetes.io/projected/452e60e3-6e31-4a43-9b1f-a79af3c33099-kube-api-access-2rwcp\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.665750 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452e60e3-6e31-4a43-9b1f-a79af3c33099-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.993649 4955 generic.go:334] "Generic (PLEG): container finished" podID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerID="2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f" exitCode=0 Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.993710 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerDied","Data":"2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f"} Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.993733 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5hqg" Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.993757 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5hqg" event={"ID":"452e60e3-6e31-4a43-9b1f-a79af3c33099","Type":"ContainerDied","Data":"ed6925990322ed082b454a2130258dbd895a653e772eb64993c5aff30129e5bb"} Feb 02 13:39:37 crc kubenswrapper[4955]: I0202 13:39:37.993784 4955 scope.go:117] "RemoveContainer" containerID="2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.016272 4955 scope.go:117] "RemoveContainer" containerID="b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.019538 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5hqg"] Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.029747 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d5hqg"] Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.040777 4955 scope.go:117] "RemoveContainer" containerID="0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.080271 4955 scope.go:117] "RemoveContainer" containerID="2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f" Feb 02 13:39:38 crc kubenswrapper[4955]: E0202 13:39:38.080644 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f\": container with ID starting with 2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f not found: ID does not exist" containerID="2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.080754 
4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f"} err="failed to get container status \"2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f\": rpc error: code = NotFound desc = could not find container \"2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f\": container with ID starting with 2846b7b4805cdac7c56d04c97b96ba38caf724fa50fe21787bfb53e196f92e1f not found: ID does not exist" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.080782 4955 scope.go:117] "RemoveContainer" containerID="b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44" Feb 02 13:39:38 crc kubenswrapper[4955]: E0202 13:39:38.080977 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44\": container with ID starting with b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44 not found: ID does not exist" containerID="b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.081001 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44"} err="failed to get container status \"b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44\": rpc error: code = NotFound desc = could not find container \"b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44\": container with ID starting with b0bf096a9c358f720ed548c265f0ed3eff6e41d4da87f70a17c9d2fa5c011b44 not found: ID does not exist" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.081015 4955 scope.go:117] "RemoveContainer" containerID="0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2" Feb 02 13:39:38 crc kubenswrapper[4955]: E0202 13:39:38.081284 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2\": container with ID starting with 0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2 not found: ID does not exist" containerID="0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2" Feb 02 13:39:38 crc kubenswrapper[4955]: I0202 13:39:38.081315 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2"} err="failed to get container status \"0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2\": rpc error: code = NotFound desc = could not find container \"0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2\": container with ID starting with 0beee4eb863b1fd1220f528fd79d09a125a24830ac3e190c62cb26bc4c514ef2 not found: ID does not exist" Feb 02 13:39:39 crc kubenswrapper[4955]: I0202 13:39:39.726866 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" path="/var/lib/kubelet/pods/452e60e3-6e31-4a43-9b1f-a79af3c33099/volumes" Feb 02 13:39:41 crc kubenswrapper[4955]: I0202 13:39:41.394706 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:41 crc kubenswrapper[4955]: I0202 13:39:41.395011 4955 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:41 crc kubenswrapper[4955]: I0202 13:39:41.439167 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:42 crc kubenswrapper[4955]: I0202 13:39:42.069228 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:42 crc kubenswrapper[4955]: I0202 13:39:42.119592 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d9hwb"] Feb 02 13:39:42 crc kubenswrapper[4955]: I0202 13:39:42.716947 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:39:42 crc kubenswrapper[4955]: E0202 13:39:42.717642 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:39:44 crc kubenswrapper[4955]: I0202 13:39:44.045115 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d9hwb" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="registry-server" containerID="cri-o://d0e514c2c9bb24385862f9ddabe9140245c276c3ba9d74baf825a12adbb004ab" gracePeriod=2 Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.058440 4955 generic.go:334] "Generic (PLEG): container finished" podID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerID="d0e514c2c9bb24385862f9ddabe9140245c276c3ba9d74baf825a12adbb004ab" exitCode=0 Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.058536 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hwb" event={"ID":"57aa5139-68f0-4aee-954a-eafc5eb4d136","Type":"ContainerDied","Data":"d0e514c2c9bb24385862f9ddabe9140245c276c3ba9d74baf825a12adbb004ab"} Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.198009 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.316528 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-catalog-content\") pod \"57aa5139-68f0-4aee-954a-eafc5eb4d136\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.316952 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-utilities\") pod \"57aa5139-68f0-4aee-954a-eafc5eb4d136\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.317005 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkmkh\" (UniqueName: \"kubernetes.io/projected/57aa5139-68f0-4aee-954a-eafc5eb4d136-kube-api-access-bkmkh\") pod \"57aa5139-68f0-4aee-954a-eafc5eb4d136\" (UID: \"57aa5139-68f0-4aee-954a-eafc5eb4d136\") " Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.318014 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-utilities" (OuterVolumeSpecName: "utilities") pod "57aa5139-68f0-4aee-954a-eafc5eb4d136" (UID: "57aa5139-68f0-4aee-954a-eafc5eb4d136"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.326212 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57aa5139-68f0-4aee-954a-eafc5eb4d136-kube-api-access-bkmkh" (OuterVolumeSpecName: "kube-api-access-bkmkh") pod "57aa5139-68f0-4aee-954a-eafc5eb4d136" (UID: "57aa5139-68f0-4aee-954a-eafc5eb4d136"). InnerVolumeSpecName "kube-api-access-bkmkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.372541 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57aa5139-68f0-4aee-954a-eafc5eb4d136" (UID: "57aa5139-68f0-4aee-954a-eafc5eb4d136"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.419773 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.419807 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkmkh\" (UniqueName: \"kubernetes.io/projected/57aa5139-68f0-4aee-954a-eafc5eb4d136-kube-api-access-bkmkh\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:45 crc kubenswrapper[4955]: I0202 13:39:45.419817 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa5139-68f0-4aee-954a-eafc5eb4d136-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.073763 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hwb" event={"ID":"57aa5139-68f0-4aee-954a-eafc5eb4d136","Type":"ContainerDied","Data":"04b437d57f50a07b86700df76f051c12c12bf7ecd197f4302ed239d99a4f53ea"} Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.073887 4955 scope.go:117] "RemoveContainer" containerID="d0e514c2c9bb24385862f9ddabe9140245c276c3ba9d74baf825a12adbb004ab" Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.073799 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9hwb" Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.103126 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d9hwb"] Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.107191 4955 scope.go:117] "RemoveContainer" containerID="a0a1ef521fa4dd2273048ee5c2ebb5472dc6134d0fea744beb9b5abd110ad023" Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.113796 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d9hwb"] Feb 02 13:39:46 crc kubenswrapper[4955]: I0202 13:39:46.136217 4955 scope.go:117] "RemoveContainer" containerID="964df5bbe776e160da7de25d71ff9a08e5ee9ed052cfc8f8391ff7148f35e857" Feb 02 13:39:47 crc kubenswrapper[4955]: I0202 13:39:47.728175 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" path="/var/lib/kubelet/pods/57aa5139-68f0-4aee-954a-eafc5eb4d136/volumes" Feb 02 13:39:55 crc kubenswrapper[4955]: I0202 13:39:55.716773 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:39:55 crc kubenswrapper[4955]: E0202 13:39:55.717481 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:39:58 crc kubenswrapper[4955]: I0202 13:39:58.173978 4955 generic.go:334] "Generic (PLEG): container finished" podID="254c47a9-edbb-46bb-8c4a-72395aa8f8b0" containerID="1e38efef6851786dd9c57ed2c0f67ed7d0f5a7cf9c12c0450f76038b3e1848e9" exitCode=0 Feb 02 13:39:58 crc kubenswrapper[4955]: I0202 13:39:58.174091 4955 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" event={"ID":"254c47a9-edbb-46bb-8c4a-72395aa8f8b0","Type":"ContainerDied","Data":"1e38efef6851786dd9c57ed2c0f67ed7d0f5a7cf9c12c0450f76038b3e1848e9"} Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.556231 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.692809 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-1\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.692945 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-extra-config-0\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.693704 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwxn2\" (UniqueName: \"kubernetes.io/projected/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-kube-api-access-gwxn2\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.693778 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-1\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.693819 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-0\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.693872 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-inventory\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.693894 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-ssh-key-openstack-edpm-ipam\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.694359 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-combined-ca-bundle\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.694420 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-0\") pod \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\" (UID: \"254c47a9-edbb-46bb-8c4a-72395aa8f8b0\") " Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.698792 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.698720 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-kube-api-access-gwxn2" (OuterVolumeSpecName: "kube-api-access-gwxn2") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "kube-api-access-gwxn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.721035 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.726500 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.728048 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.728393 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-inventory" (OuterVolumeSpecName: "inventory") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.729424 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.731965 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.747876 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "254c47a9-edbb-46bb-8c4a-72395aa8f8b0" (UID: "254c47a9-edbb-46bb-8c4a-72395aa8f8b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798430 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798459 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798469 4955 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798478 4955 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798487 4955 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798495 4955 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798505 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwxn2\" (UniqueName: \"kubernetes.io/projected/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-kube-api-access-gwxn2\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798516 4955 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 13:39:59 crc kubenswrapper[4955]: I0202 13:39:59.798526 4955 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/254c47a9-edbb-46bb-8c4a-72395aa8f8b0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 
13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.194340 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" event={"ID":"254c47a9-edbb-46bb-8c4a-72395aa8f8b0","Type":"ContainerDied","Data":"604242c3d8f98387c379696a7c8311c10dd29cfac2902c8cc7873a952fd7304f"} Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.194818 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604242c3d8f98387c379696a7c8311c10dd29cfac2902c8cc7873a952fd7304f" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.194414 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjd98" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.331809 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv"] Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332329 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254c47a9-edbb-46bb-8c4a-72395aa8f8b0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332352 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="254c47a9-edbb-46bb-8c4a-72395aa8f8b0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332372 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="extract-content" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332380 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="extract-content" Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332392 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="registry-server" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332400 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="registry-server" Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332413 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="extract-content" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332420 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="extract-content" Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332447 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="extract-utilities" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332456 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="extract-utilities" Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332466 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="extract-utilities" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332474 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="extract-utilities" Feb 02 13:40:00 crc kubenswrapper[4955]: E0202 13:40:00.332493 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" 
containerName="registry-server" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332500 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="registry-server" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332696 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="452e60e3-6e31-4a43-9b1f-a79af3c33099" containerName="registry-server" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332717 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="57aa5139-68f0-4aee-954a-eafc5eb4d136" containerName="registry-server" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.332729 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="254c47a9-edbb-46bb-8c4a-72395aa8f8b0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.333356 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.335615 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.335785 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.336028 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-65wvh" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.336591 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.337488 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.347802 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv"] Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.409654 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.409729 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.409762 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4srj\" (UniqueName: \"kubernetes.io/projected/95a8c6f1-eab8-467a-9f76-5827e0c35a83-kube-api-access-l4srj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc 
kubenswrapper[4955]: I0202 13:40:00.409926 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.409970 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.410251 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.410317 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.511865 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.512076 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.512117 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.512173 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.512212 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4srj\" (UniqueName: \"kubernetes.io/projected/95a8c6f1-eab8-467a-9f76-5827e0c35a83-kube-api-access-l4srj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.512297 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.512323 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.516484 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.516902 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.516936 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.517345 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.518935 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.519231 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.528249 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4srj\" (UniqueName: \"kubernetes.io/projected/95a8c6f1-eab8-467a-9f76-5827e0c35a83-kube-api-access-l4srj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:00 crc kubenswrapper[4955]: I0202 13:40:00.651297 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:40:01 crc kubenswrapper[4955]: I0202 13:40:01.150195 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv"] Feb 02 13:40:01 crc kubenswrapper[4955]: I0202 13:40:01.202664 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" event={"ID":"95a8c6f1-eab8-467a-9f76-5827e0c35a83","Type":"ContainerStarted","Data":"095df89c791bf32679841c7b4b6d76ea9b43008f4568f5492fa0a6c1becb80b0"} Feb 02 13:40:02 crc kubenswrapper[4955]: I0202 13:40:02.212993 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" event={"ID":"95a8c6f1-eab8-467a-9f76-5827e0c35a83","Type":"ContainerStarted","Data":"021b3f41355d113e4fbf4ba844eba38362cea96d327f47f995c6bb8cb545985e"} Feb 02 13:40:02 crc kubenswrapper[4955]: I0202 13:40:02.234796 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" podStartSLOduration=1.7872691189999999 podStartE2EDuration="2.23477024s" podCreationTimestamp="2026-02-02 13:40:00 +0000 UTC" firstStartedPulling="2026-02-02 13:40:01.156399577 +0000 UTC m=+2252.068736027" lastFinishedPulling="2026-02-02 13:40:01.603900698 +0000 UTC m=+2252.516237148" observedRunningTime="2026-02-02 13:40:02.231117711 +0000 UTC m=+2253.143454161" watchObservedRunningTime="2026-02-02 13:40:02.23477024 +0000 UTC m=+2253.147106690" Feb 02 13:40:06 crc kubenswrapper[4955]: I0202 13:40:06.716997 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:40:06 crc kubenswrapper[4955]: E0202 13:40:06.717633 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:40:18 crc kubenswrapper[4955]: I0202 13:40:18.717099 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:40:18 crc kubenswrapper[4955]: E0202 13:40:18.717830 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:40:31 crc kubenswrapper[4955]: I0202 13:40:31.733460 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:40:31 crc kubenswrapper[4955]: E0202 13:40:31.735009 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.073156 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9zsx"] Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.076148 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.089023 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9zsx"] Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.168143 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-catalog-content\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.168529 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-utilities\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.168604 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmv5\" (UniqueName: \"kubernetes.io/projected/0310d4dd-712b-4324-adcb-26214ac382ca-kube-api-access-gvmv5\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.270219 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-catalog-content\") pod \"redhat-marketplace-z9zsx\" (UID: 
\"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.270411 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-utilities\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.270450 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvmv5\" (UniqueName: \"kubernetes.io/projected/0310d4dd-712b-4324-adcb-26214ac382ca-kube-api-access-gvmv5\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.270834 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-catalog-content\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.270933 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-utilities\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.293729 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvmv5\" (UniqueName: \"kubernetes.io/projected/0310d4dd-712b-4324-adcb-26214ac382ca-kube-api-access-gvmv5\") pod \"redhat-marketplace-z9zsx\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.399150 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.716462 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:40:44 crc kubenswrapper[4955]: E0202 13:40:44.717072 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:40:44 crc kubenswrapper[4955]: I0202 13:40:44.908724 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9zsx"] Feb 02 13:40:45 crc kubenswrapper[4955]: I0202 13:40:45.559685 4955 generic.go:334] "Generic (PLEG): container finished" podID="0310d4dd-712b-4324-adcb-26214ac382ca" containerID="8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f" exitCode=0 Feb 02 13:40:45 crc kubenswrapper[4955]: I0202 13:40:45.559756 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9zsx" event={"ID":"0310d4dd-712b-4324-adcb-26214ac382ca","Type":"ContainerDied","Data":"8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f"} Feb 02 13:40:45 crc kubenswrapper[4955]: I0202 13:40:45.559995 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9zsx" event={"ID":"0310d4dd-712b-4324-adcb-26214ac382ca","Type":"ContainerStarted","Data":"1ab9fc99d79fd36c4fad1b81cbff07d40aedef79ef7b3d8df84a2951b5756d6f"} Feb 02 13:40:46 crc kubenswrapper[4955]: I0202 13:40:46.572965 4955 generic.go:334] "Generic (PLEG): container finished" podID="0310d4dd-712b-4324-adcb-26214ac382ca" containerID="dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b" exitCode=0 Feb 02 13:40:46 crc kubenswrapper[4955]: I0202 13:40:46.573087 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9zsx" event={"ID":"0310d4dd-712b-4324-adcb-26214ac382ca","Type":"ContainerDied","Data":"dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b"} Feb 02 13:40:47 crc kubenswrapper[4955]: I0202 13:40:47.598219 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9zsx" event={"ID":"0310d4dd-712b-4324-adcb-26214ac382ca","Type":"ContainerStarted","Data":"968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf"} Feb 02 13:40:47 crc kubenswrapper[4955]: I0202 13:40:47.626133 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9zsx" podStartSLOduration=2.213418738 podStartE2EDuration="3.626113272s" podCreationTimestamp="2026-02-02 13:40:44 +0000 UTC" firstStartedPulling="2026-02-02 13:40:45.562090042 +0000 UTC m=+2296.474426492" lastFinishedPulling="2026-02-02 13:40:46.974784576 +0000 UTC m=+2297.887121026" observedRunningTime="2026-02-02 13:40:47.617292588 +0000 UTC m=+2298.529629048" watchObservedRunningTime="2026-02-02 13:40:47.626113272 +0000 UTC m=+2298.538449722" Feb 02 13:40:54 crc kubenswrapper[4955]: I0202 13:40:54.399432 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:54 
crc kubenswrapper[4955]: I0202 13:40:54.401588 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:54 crc kubenswrapper[4955]: I0202 13:40:54.448075 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:54 crc kubenswrapper[4955]: I0202 13:40:54.697466 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:54 crc kubenswrapper[4955]: I0202 13:40:54.747448 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9zsx"] Feb 02 13:40:56 crc kubenswrapper[4955]: I0202 13:40:56.669181 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9zsx" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="registry-server" containerID="cri-o://968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf" gracePeriod=2 Feb 02 13:40:56 crc kubenswrapper[4955]: I0202 13:40:56.716581 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:40:56 crc kubenswrapper[4955]: E0202 13:40:56.717131 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.132783 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.229636 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvmv5\" (UniqueName: \"kubernetes.io/projected/0310d4dd-712b-4324-adcb-26214ac382ca-kube-api-access-gvmv5\") pod \"0310d4dd-712b-4324-adcb-26214ac382ca\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.229885 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-utilities\") pod \"0310d4dd-712b-4324-adcb-26214ac382ca\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.229921 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-catalog-content\") pod \"0310d4dd-712b-4324-adcb-26214ac382ca\" (UID: \"0310d4dd-712b-4324-adcb-26214ac382ca\") " Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.230622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-utilities" (OuterVolumeSpecName: "utilities") pod "0310d4dd-712b-4324-adcb-26214ac382ca" (UID: "0310d4dd-712b-4324-adcb-26214ac382ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.235661 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0310d4dd-712b-4324-adcb-26214ac382ca-kube-api-access-gvmv5" (OuterVolumeSpecName: "kube-api-access-gvmv5") pod "0310d4dd-712b-4324-adcb-26214ac382ca" (UID: "0310d4dd-712b-4324-adcb-26214ac382ca"). InnerVolumeSpecName "kube-api-access-gvmv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.253958 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0310d4dd-712b-4324-adcb-26214ac382ca" (UID: "0310d4dd-712b-4324-adcb-26214ac382ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.332258 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvmv5\" (UniqueName: \"kubernetes.io/projected/0310d4dd-712b-4324-adcb-26214ac382ca-kube-api-access-gvmv5\") on node \"crc\" DevicePath \"\"" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.332300 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.332313 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0310d4dd-712b-4324-adcb-26214ac382ca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.690025 4955 generic.go:334] "Generic (PLEG): container finished" podID="0310d4dd-712b-4324-adcb-26214ac382ca" containerID="968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf" exitCode=0 Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.690292 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9zsx" event={"ID":"0310d4dd-712b-4324-adcb-26214ac382ca","Type":"ContainerDied","Data":"968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf"} Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.690322 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9zsx" event={"ID":"0310d4dd-712b-4324-adcb-26214ac382ca","Type":"ContainerDied","Data":"1ab9fc99d79fd36c4fad1b81cbff07d40aedef79ef7b3d8df84a2951b5756d6f"} Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.690341 4955 scope.go:117] "RemoveContainer" containerID="968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.690533 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9zsx" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.718928 4955 scope.go:117] "RemoveContainer" containerID="dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.732420 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9zsx"] Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.749171 4955 scope.go:117] "RemoveContainer" containerID="8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.768023 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9zsx"] Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.809070 4955 scope.go:117] "RemoveContainer" containerID="968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf" Feb 02 13:40:57 crc kubenswrapper[4955]: E0202 13:40:57.809763 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf\": container with ID starting with 968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf not found: ID does not exist" containerID="968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.809813 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf"} err="failed to get container status \"968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf\": rpc error: code = NotFound desc = could not find container \"968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf\": container with ID starting with 968461cc675d7d43161c5fb8323588bffd9759365f2eb0d048c58141d0c067cf not found: ID does not exist" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.809843 4955 scope.go:117] "RemoveContainer" containerID="dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b" Feb 02 13:40:57 crc kubenswrapper[4955]: E0202 13:40:57.810339 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b\": container with ID starting with dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b not found: ID does not exist" containerID="dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.810382 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b"} err="failed to get container status \"dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b\": rpc error: code = NotFound desc = could not find container \"dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b\": container with ID starting with dc19f08db635517744b91f264866f30b4a5266b62f966b7335c4cd5f79efc62b not found: ID does not exist" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.810405 4955 scope.go:117] "RemoveContainer" containerID="8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f" Feb 02 13:40:57 crc kubenswrapper[4955]: E0202 13:40:57.810902 4955 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f\": container with ID starting with 8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f not found: ID does not exist" containerID="8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f" Feb 02 13:40:57 crc kubenswrapper[4955]: I0202 13:40:57.810926 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f"} err="failed to get container status \"8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f\": rpc error: code = NotFound desc = could not find container \"8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f\": container with ID starting with 8cfc0eadd2301d3d7fc5ecc54201d4ebd93950388b4ebf293c56d6d5266e5a5f not found: ID does not exist" Feb 02 13:40:59 crc kubenswrapper[4955]: I0202 13:40:59.726965 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" path="/var/lib/kubelet/pods/0310d4dd-712b-4324-adcb-26214ac382ca/volumes" Feb 02 13:41:11 crc kubenswrapper[4955]: I0202 13:41:11.716359 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:41:11 crc kubenswrapper[4955]: E0202 13:41:11.717421 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:41:23 crc kubenswrapper[4955]: I0202 13:41:23.717173 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:41:23 crc kubenswrapper[4955]: E0202 13:41:23.718023 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:41:34 crc kubenswrapper[4955]: I0202 13:41:34.717239 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:41:34 crc kubenswrapper[4955]: E0202 13:41:34.717977 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:41:47 crc kubenswrapper[4955]: I0202 13:41:47.716536 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:41:47 crc kubenswrapper[4955]: E0202 13:41:47.717310 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:42:01 crc kubenswrapper[4955]: I0202 13:42:01.716299 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:42:01 crc kubenswrapper[4955]: E0202 13:42:01.717169 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:42:12 crc kubenswrapper[4955]: I0202 13:42:12.332225 4955 generic.go:334] "Generic (PLEG): container finished" podID="95a8c6f1-eab8-467a-9f76-5827e0c35a83" containerID="021b3f41355d113e4fbf4ba844eba38362cea96d327f47f995c6bb8cb545985e" exitCode=0 Feb 02 13:42:12 crc kubenswrapper[4955]: I0202 13:42:12.332263 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" event={"ID":"95a8c6f1-eab8-467a-9f76-5827e0c35a83","Type":"ContainerDied","Data":"021b3f41355d113e4fbf4ba844eba38362cea96d327f47f995c6bb8cb545985e"} Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.750872 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875572 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-0\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875731 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-telemetry-combined-ca-bundle\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875830 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4srj\" (UniqueName: \"kubernetes.io/projected/95a8c6f1-eab8-467a-9f76-5827e0c35a83-kube-api-access-l4srj\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875864 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-inventory\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875908 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ssh-key-openstack-edpm-ipam\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875951 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-1\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.875998 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-2\") pod \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\" (UID: \"95a8c6f1-eab8-467a-9f76-5827e0c35a83\") " Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.881623 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.881775 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a8c6f1-eab8-467a-9f76-5827e0c35a83-kube-api-access-l4srj" (OuterVolumeSpecName: "kube-api-access-l4srj") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "kube-api-access-l4srj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.904122 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.905134 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.906821 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.914292 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-inventory" (OuterVolumeSpecName: "inventory") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.914859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "95a8c6f1-eab8-467a-9f76-5827e0c35a83" (UID: "95a8c6f1-eab8-467a-9f76-5827e0c35a83"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.979172 4955 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.979653 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4srj\" (UniqueName: \"kubernetes.io/projected/95a8c6f1-eab8-467a-9f76-5827e0c35a83-kube-api-access-l4srj\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.979762 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.979820 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.979912 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.980007 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:13 crc kubenswrapper[4955]: I0202 13:42:13.980086 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95a8c6f1-eab8-467a-9f76-5827e0c35a83-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:42:14 crc kubenswrapper[4955]: I0202 13:42:14.352897 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" event={"ID":"95a8c6f1-eab8-467a-9f76-5827e0c35a83","Type":"ContainerDied","Data":"095df89c791bf32679841c7b4b6d76ea9b43008f4568f5492fa0a6c1becb80b0"} Feb 02 13:42:14 crc kubenswrapper[4955]: I0202 13:42:14.352945 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095df89c791bf32679841c7b4b6d76ea9b43008f4568f5492fa0a6c1becb80b0" Feb 02 
13:42:14 crc kubenswrapper[4955]: I0202 13:42:14.352962 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv" Feb 02 13:42:15 crc kubenswrapper[4955]: I0202 13:42:15.720754 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:42:15 crc kubenswrapper[4955]: E0202 13:42:15.721106 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:42:27 crc kubenswrapper[4955]: I0202 13:42:27.717724 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:42:27 crc kubenswrapper[4955]: E0202 13:42:27.718473 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:42:39 crc kubenswrapper[4955]: I0202 13:42:39.724096 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:42:39 crc kubenswrapper[4955]: E0202 13:42:39.725022 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:42:51 crc kubenswrapper[4955]: I0202 13:42:51.716028 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:42:51 crc kubenswrapper[4955]: E0202 13:42:51.716720 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:43:03 crc kubenswrapper[4955]: I0202 13:43:03.716175 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:43:03 crc kubenswrapper[4955]: E0202 13:43:03.716949 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" 
podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:43:18 crc kubenswrapper[4955]: I0202 13:43:18.716390 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:43:18 crc kubenswrapper[4955]: E0202 13:43:18.717114 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:43:33 crc kubenswrapper[4955]: I0202 13:43:33.717295 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:43:33 crc kubenswrapper[4955]: E0202 13:43:33.718028 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:43:46 crc kubenswrapper[4955]: I0202 13:43:46.716087 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:43:46 crc kubenswrapper[4955]: E0202 13:43:46.716928 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:43:59 crc kubenswrapper[4955]: I0202 13:43:59.722937 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:43:59 crc kubenswrapper[4955]: E0202 13:43:59.725085 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:44:12 crc kubenswrapper[4955]: I0202 13:44:12.717022 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:44:13 crc kubenswrapper[4955]: I0202 13:44:13.598456 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"8a69cc62a3e0e4f04e1954470b35a9d41be0703c2cd0b2c63c5801f304b94da6"} Feb 02 13:44:58 crc kubenswrapper[4955]: I0202 13:44:58.832081 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 
13:45:00.151028 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd"] Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.151796 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a8c6f1-eab8-467a-9f76-5827e0c35a83" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.151818 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a8c6f1-eab8-467a-9f76-5827e0c35a83" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.151847 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="registry-server" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.151857 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="registry-server" Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.151871 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="extract-content" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.151877 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="extract-content" Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.151905 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="extract-utilities" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.151914 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="extract-utilities" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.152136 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0310d4dd-712b-4324-adcb-26214ac382ca" containerName="registry-server" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.152148 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a8c6f1-eab8-467a-9f76-5827e0c35a83" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.152879 4955 util.go:30] "No sandbox for pod can be found. 
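Admitting the new collect-profiles pod triggers RemoveStaleState: the CPU and memory managers purge per-container state left behind by the two pods deleted earlier (the registry pod 0310d4dd-... and the telemetry pod 95a8c6f1-...). A small sketch, same format assumption, listing what was purged:

```python
import re

# cpu_manager.go RemoveStaleState records emitted while admitting the
# collect-profiles pod above.
STALE = re.compile(
    r'"RemoveStaleState: removing container" '
    r'podUID="(?P<uid>[^"]+)" containerName="(?P<name>[^"]+)"'
)

def purged_stale_containers(journal_text: str) -> list:
    """(podUID, containerName) pairs whose CPU-manager state was purged."""
    return [(m.group("uid"), m.group("name"))
            for m in STALE.finditer(journal_text)]
```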
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.155007 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.156682 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.170221 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.176824 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15342842-84ab-4afe-b6ad-4999f805c766-config-volume\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.176943 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5q8p\" (UniqueName: \"kubernetes.io/projected/15342842-84ab-4afe-b6ad-4999f805c766-kube-api-access-z5q8p\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.177144 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15342842-84ab-4afe-b6ad-4999f805c766-secret-volume\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.278290 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15342842-84ab-4afe-b6ad-4999f805c766-secret-volume\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.278372 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15342842-84ab-4afe-b6ad-4999f805c766-config-volume\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.278440 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5q8p\" (UniqueName: \"kubernetes.io/projected/15342842-84ab-4afe-b6ad-4999f805c766-kube-api-access-z5q8p\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.279432 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15342842-84ab-4afe-b6ad-4999f805c766-config-volume\") pod 
\"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.284451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15342842-84ab-4afe-b6ad-4999f805c766-secret-volume\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.296379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5q8p\" (UniqueName: \"kubernetes.io/projected/15342842-84ab-4afe-b6ad-4999f805c766-kube-api-access-z5q8p\") pod \"collect-profiles-29500665-t8zxd\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.476446 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.703832 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.704434 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="19fa3e77-422a-425a-8e53-cefd5d880462" containerName="openstackclient" containerID="cri-o://6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1" gracePeriod=2 Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.732045 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.756867 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.757340 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fa3e77-422a-425a-8e53-cefd5d880462" containerName="openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.757361 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fa3e77-422a-425a-8e53-cefd5d880462" containerName="openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.757616 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fa3e77-422a-425a-8e53-cefd5d880462" containerName="openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.758287 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.767442 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="19fa3e77-422a-425a-8e53-cefd5d880462" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.772141 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.772648 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="60285b06-983a-4344-959e-ed58414afdd9" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.788225 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.789066 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-l4l6z openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-l4l6z openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="60285b06-983a-4344-959e-ed58414afdd9" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.796037 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.796586 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4l6z\" (UniqueName: \"kubernetes.io/projected/60285b06-983a-4344-959e-ed58414afdd9-kube-api-access-l4l6z\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.796669 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60285b06-983a-4344-959e-ed58414afdd9-openstack-config\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.796703 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.796734 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-openstack-config-secret\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.819115 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.820403 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.828007 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898737 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hk9m\" (UniqueName: \"kubernetes.io/projected/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-kube-api-access-6hk9m\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898810 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config-secret\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60285b06-983a-4344-959e-ed58414afdd9-openstack-config\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898899 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898919 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-openstack-config-secret\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.898997 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.899089 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4l6z\" (UniqueName: \"kubernetes.io/projected/60285b06-983a-4344-959e-ed58414afdd9-kube-api-access-l4l6z\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.900272 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60285b06-983a-4344-959e-ed58414afdd9-openstack-config\") pod \"openstackclient\" (UID: 
\"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.903938 4955 projected.go:194] Error preparing data for projected volume kube-api-access-l4l6z for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (60285b06-983a-4344-959e-ed58414afdd9) does not match the UID in record. The object might have been deleted and then recreated Feb 02 13:45:00 crc kubenswrapper[4955]: E0202 13:45:00.904061 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60285b06-983a-4344-959e-ed58414afdd9-kube-api-access-l4l6z podName:60285b06-983a-4344-959e-ed58414afdd9 nodeName:}" failed. No retries permitted until 2026-02-02 13:45:01.404040501 +0000 UTC m=+2552.316376951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l4l6z" (UniqueName: "kubernetes.io/projected/60285b06-983a-4344-959e-ed58414afdd9-kube-api-access-l4l6z") pod "openstackclient" (UID: "60285b06-983a-4344-959e-ed58414afdd9") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (60285b06-983a-4344-959e-ed58414afdd9) does not match the UID in record. The object might have been deleted and then recreated Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.908318 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-openstack-config-secret\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.920457 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.973768 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd"] Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.989927 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:00 crc kubenswrapper[4955]: I0202 13:45:00.993856 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="60285b06-983a-4344-959e-ed58414afdd9" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.000491 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hk9m\" (UniqueName: \"kubernetes.io/projected/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-kube-api-access-6hk9m\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.000591 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config-secret\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.000642 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.000788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.002219 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.003999 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.004644 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config-secret\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.005089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.006787 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="60285b06-983a-4344-959e-ed58414afdd9" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.018584 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hk9m\" (UniqueName: \"kubernetes.io/projected/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-kube-api-access-6hk9m\") pod \"openstackclient\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.102230 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-openstack-config-secret\") pod \"60285b06-983a-4344-959e-ed58414afdd9\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.102385 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-combined-ca-bundle\") pod \"60285b06-983a-4344-959e-ed58414afdd9\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.102618 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60285b06-983a-4344-959e-ed58414afdd9-openstack-config\") pod \"60285b06-983a-4344-959e-ed58414afdd9\" (UID: \"60285b06-983a-4344-959e-ed58414afdd9\") " Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.103014 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4l6z\" (UniqueName: \"kubernetes.io/projected/60285b06-983a-4344-959e-ed58414afdd9-kube-api-access-l4l6z\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.103403 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60285b06-983a-4344-959e-ed58414afdd9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "60285b06-983a-4344-959e-ed58414afdd9" (UID: "60285b06-983a-4344-959e-ed58414afdd9"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.106520 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "60285b06-983a-4344-959e-ed58414afdd9" (UID: "60285b06-983a-4344-959e-ed58414afdd9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.108681 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60285b06-983a-4344-959e-ed58414afdd9" (UID: "60285b06-983a-4344-959e-ed58414afdd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.146431 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.204807 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60285b06-983a-4344-959e-ed58414afdd9-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.204843 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.204876 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60285b06-983a-4344-959e-ed58414afdd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.683406 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:45:01 crc kubenswrapper[4955]: W0202 13:45:01.691072 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82e2ca6a_27a0_4777_92e8_eb25aa3e2b73.slice/crio-e24b89f1db9beccac629634a19c77304a1b000e399478198e847732d1b4efb23 WatchSource:0}: Error finding container e24b89f1db9beccac629634a19c77304a1b000e399478198e847732d1b4efb23: Status 404 returned error can't find the container with id e24b89f1db9beccac629634a19c77304a1b000e399478198e847732d1b4efb23 Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.731252 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60285b06-983a-4344-959e-ed58414afdd9" path="/var/lib/kubelet/pods/60285b06-983a-4344-959e-ed58414afdd9/volumes" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.904256 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-chb64"] Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.905726 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-chb64" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.919523 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-chb64"] Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.925772 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f721ccf9-9b3f-412e-86ec-13501bf80899-operator-scripts\") pod \"aodh-db-create-chb64\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " pod="openstack/aodh-db-create-chb64" Feb 02 13:45:01 crc kubenswrapper[4955]: I0202 13:45:01.926066 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgnr\" (UniqueName: \"kubernetes.io/projected/f721ccf9-9b3f-412e-86ec-13501bf80899-kube-api-access-9kgnr\") pod \"aodh-db-create-chb64\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " pod="openstack/aodh-db-create-chb64" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.003378 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73","Type":"ContainerStarted","Data":"f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2"} Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.003628 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73","Type":"ContainerStarted","Data":"e24b89f1db9beccac629634a19c77304a1b000e399478198e847732d1b4efb23"} Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.006300 4955 generic.go:334] "Generic (PLEG): container finished" podID="15342842-84ab-4afe-b6ad-4999f805c766" containerID="db7cdefb9922cead2deb9193fcd19676e083f944494b91dc395ffc1376f8631d" exitCode=0 Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.006350 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" event={"ID":"15342842-84ab-4afe-b6ad-4999f805c766","Type":"ContainerDied","Data":"db7cdefb9922cead2deb9193fcd19676e083f944494b91dc395ffc1376f8631d"} Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.006374 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" event={"ID":"15342842-84ab-4afe-b6ad-4999f805c766","Type":"ContainerStarted","Data":"8edc2b832b08f30a54e6ff0c7fd704501e4661a6f1b6e8ad55126789e638ee30"} Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.006380 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.008199 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-a2bb-account-create-update-b25nd"] Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.009661 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.011880 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.018201 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a2bb-account-create-update-b25nd"] Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.027915 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f721ccf9-9b3f-412e-86ec-13501bf80899-operator-scripts\") pod \"aodh-db-create-chb64\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " pod="openstack/aodh-db-create-chb64" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.027966 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-operator-scripts\") pod \"aodh-a2bb-account-create-update-b25nd\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.028056 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kgnr\" (UniqueName: \"kubernetes.io/projected/f721ccf9-9b3f-412e-86ec-13501bf80899-kube-api-access-9kgnr\") pod \"aodh-db-create-chb64\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " pod="openstack/aodh-db-create-chb64" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.028114 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zdf\" (UniqueName: \"kubernetes.io/projected/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-kube-api-access-49zdf\") pod \"aodh-a2bb-account-create-update-b25nd\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.029792 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f721ccf9-9b3f-412e-86ec-13501bf80899-operator-scripts\") pod \"aodh-db-create-chb64\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " pod="openstack/aodh-db-create-chb64" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.049416 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="60285b06-983a-4344-959e-ed58414afdd9" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.053280 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kgnr\" (UniqueName: \"kubernetes.io/projected/f721ccf9-9b3f-412e-86ec-13501bf80899-kube-api-access-9kgnr\") pod \"aodh-db-create-chb64\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " pod="openstack/aodh-db-create-chb64" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.058291 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.058270295 podStartE2EDuration="2.058270295s" podCreationTimestamp="2026-02-02 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:45:02.030802496 +0000 UTC m=+2552.943138956" 
watchObservedRunningTime="2026-02-02 13:45:02.058270295 +0000 UTC m=+2552.970606745" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.129708 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-operator-scripts\") pod \"aodh-a2bb-account-create-update-b25nd\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.129806 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zdf\" (UniqueName: \"kubernetes.io/projected/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-kube-api-access-49zdf\") pod \"aodh-a2bb-account-create-update-b25nd\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.130884 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-operator-scripts\") pod \"aodh-a2bb-account-create-update-b25nd\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.149068 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zdf\" (UniqueName: \"kubernetes.io/projected/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-kube-api-access-49zdf\") pod \"aodh-a2bb-account-create-update-b25nd\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.247537 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-chb64" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.332854 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.712039 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-chb64"] Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.852222 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a2bb-account-create-update-b25nd"] Feb 02 13:45:02 crc kubenswrapper[4955]: W0202 13:45:02.891072 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a84457b_b28a_497d_904f_6ba8d1dbd8b1.slice/crio-7987342bf137cd4549e93ee8e84a1c6dfee99bbdd9955447d2c4a3117eef9ad9 WatchSource:0}: Error finding container 7987342bf137cd4549e93ee8e84a1c6dfee99bbdd9955447d2c4a3117eef9ad9: Status 404 returned error can't find the container with id 7987342bf137cd4549e93ee8e84a1c6dfee99bbdd9955447d2c4a3117eef9ad9 Feb 02 13:45:02 crc kubenswrapper[4955]: I0202 13:45:02.949010 4955 util.go:48] "No ready sandbox for pod can be found. 
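The pod_startup_latency_tracker record gives openstackclient a 2.058s startup SLO duration; its zero-value firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was recorded for the start. A sketch collecting these durations per pod, same format assumption:

```python
import re

# pod_startup_latency_tracker records like the openstackclient one above.
STARTUP = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)" '
    r'podStartSLOduration=(?P<slo>[0-9.]+)'
)

def startup_durations(journal_text: str) -> dict:
    """Pod -> kubelet-observed startup SLO duration in seconds."""
    return {m.group("pod"): float(m.group("slo"))
            for m in STARTUP.finditer(journal_text)}
```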
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.018285 4955 generic.go:334] "Generic (PLEG): container finished" podID="19fa3e77-422a-425a-8e53-cefd5d880462" containerID="6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1" exitCode=137 Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.018369 4955 scope.go:117] "RemoveContainer" containerID="6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.018504 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.023029 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a2bb-account-create-update-b25nd" event={"ID":"4a84457b-b28a-497d-904f-6ba8d1dbd8b1","Type":"ContainerStarted","Data":"7987342bf137cd4549e93ee8e84a1c6dfee99bbdd9955447d2c4a3117eef9ad9"} Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.026311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-chb64" event={"ID":"f721ccf9-9b3f-412e-86ec-13501bf80899","Type":"ContainerStarted","Data":"5bc5ab19eac95562b35a9770b4e094b30bfd5474120898d5472b4e2f490cb3a5"} Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.050845 4955 scope.go:117] "RemoveContainer" containerID="6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1" Feb 02 13:45:03 crc kubenswrapper[4955]: E0202 13:45:03.051626 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1\": container with ID starting with 6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1 not found: ID does not exist" containerID="6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.051663 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1"} err="failed to get container status \"6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1\": rpc error: code = NotFound desc = could not find container \"6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1\": container with ID starting with 6e7845e3e261d8aa807e1320bcf6b4a6f5baad4a239e3d74acac3a47b84e30a1 not found: ID does not exist" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.052101 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-combined-ca-bundle\") pod \"19fa3e77-422a-425a-8e53-cefd5d880462\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.052235 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config\") pod \"19fa3e77-422a-425a-8e53-cefd5d880462\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.052776 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config-secret\") pod 
\"19fa3e77-422a-425a-8e53-cefd5d880462\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.052872 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6dxw\" (UniqueName: \"kubernetes.io/projected/19fa3e77-422a-425a-8e53-cefd5d880462-kube-api-access-d6dxw\") pod \"19fa3e77-422a-425a-8e53-cefd5d880462\" (UID: \"19fa3e77-422a-425a-8e53-cefd5d880462\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.062506 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fa3e77-422a-425a-8e53-cefd5d880462-kube-api-access-d6dxw" (OuterVolumeSpecName: "kube-api-access-d6dxw") pod "19fa3e77-422a-425a-8e53-cefd5d880462" (UID: "19fa3e77-422a-425a-8e53-cefd5d880462"). InnerVolumeSpecName "kube-api-access-d6dxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.085099 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "19fa3e77-422a-425a-8e53-cefd5d880462" (UID: "19fa3e77-422a-425a-8e53-cefd5d880462"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.090075 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19fa3e77-422a-425a-8e53-cefd5d880462" (UID: "19fa3e77-422a-425a-8e53-cefd5d880462"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.154857 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "19fa3e77-422a-425a-8e53-cefd5d880462" (UID: "19fa3e77-422a-425a-8e53-cefd5d880462"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.155434 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.155456 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.155470 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6dxw\" (UniqueName: \"kubernetes.io/projected/19fa3e77-422a-425a-8e53-cefd5d880462-kube-api-access-d6dxw\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.155481 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa3e77-422a-425a-8e53-cefd5d880462-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.504860 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.524146 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="19fa3e77-422a-425a-8e53-cefd5d880462" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.663267 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15342842-84ab-4afe-b6ad-4999f805c766-secret-volume\") pod \"15342842-84ab-4afe-b6ad-4999f805c766\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.663359 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5q8p\" (UniqueName: \"kubernetes.io/projected/15342842-84ab-4afe-b6ad-4999f805c766-kube-api-access-z5q8p\") pod \"15342842-84ab-4afe-b6ad-4999f805c766\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.663626 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15342842-84ab-4afe-b6ad-4999f805c766-config-volume\") pod \"15342842-84ab-4afe-b6ad-4999f805c766\" (UID: \"15342842-84ab-4afe-b6ad-4999f805c766\") " Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.664289 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15342842-84ab-4afe-b6ad-4999f805c766-config-volume" (OuterVolumeSpecName: "config-volume") pod "15342842-84ab-4afe-b6ad-4999f805c766" (UID: "15342842-84ab-4afe-b6ad-4999f805c766"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.664679 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15342842-84ab-4afe-b6ad-4999f805c766-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.667826 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15342842-84ab-4afe-b6ad-4999f805c766-kube-api-access-z5q8p" (OuterVolumeSpecName: "kube-api-access-z5q8p") pod "15342842-84ab-4afe-b6ad-4999f805c766" (UID: "15342842-84ab-4afe-b6ad-4999f805c766"). InnerVolumeSpecName "kube-api-access-z5q8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.668213 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15342842-84ab-4afe-b6ad-4999f805c766-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15342842-84ab-4afe-b6ad-4999f805c766" (UID: "15342842-84ab-4afe-b6ad-4999f805c766"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.740949 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fa3e77-422a-425a-8e53-cefd5d880462" path="/var/lib/kubelet/pods/19fa3e77-422a-425a-8e53-cefd5d880462/volumes" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.774577 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15342842-84ab-4afe-b6ad-4999f805c766-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4955]: I0202 13:45:03.774608 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5q8p\" (UniqueName: \"kubernetes.io/projected/15342842-84ab-4afe-b6ad-4999f805c766-kube-api-access-z5q8p\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.041313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" event={"ID":"15342842-84ab-4afe-b6ad-4999f805c766","Type":"ContainerDied","Data":"8edc2b832b08f30a54e6ff0c7fd704501e4661a6f1b6e8ad55126789e638ee30"} Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.041366 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8edc2b832b08f30a54e6ff0c7fd704501e4661a6f1b6e8ad55126789e638ee30" Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.041361 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-t8zxd" Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.043330 4955 generic.go:334] "Generic (PLEG): container finished" podID="4a84457b-b28a-497d-904f-6ba8d1dbd8b1" containerID="05b536f76890245fb716e87d9e5ad75f20aa87dbd9941c63c5f4216a95d40233" exitCode=0 Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.043455 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a2bb-account-create-update-b25nd" event={"ID":"4a84457b-b28a-497d-904f-6ba8d1dbd8b1","Type":"ContainerDied","Data":"05b536f76890245fb716e87d9e5ad75f20aa87dbd9941c63c5f4216a95d40233"} Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.044893 4955 generic.go:334] "Generic (PLEG): container finished" podID="f721ccf9-9b3f-412e-86ec-13501bf80899" containerID="06ce090717891cfe6f34fa9d77b2d5f3c4ddd84022c70448a7ffcae4ff08f496" exitCode=0 Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.044943 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-chb64" event={"ID":"f721ccf9-9b3f-412e-86ec-13501bf80899","Type":"ContainerDied","Data":"06ce090717891cfe6f34fa9d77b2d5f3c4ddd84022c70448a7ffcae4ff08f496"} Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.573707 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f"] Feb 02 13:45:04 crc kubenswrapper[4955]: I0202 13:45:04.581027 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-mzf7f"] Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.454396 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-chb64" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.462153 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.516136 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49zdf\" (UniqueName: \"kubernetes.io/projected/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-kube-api-access-49zdf\") pod \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.516303 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-operator-scripts\") pod \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\" (UID: \"4a84457b-b28a-497d-904f-6ba8d1dbd8b1\") " Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.516395 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f721ccf9-9b3f-412e-86ec-13501bf80899-operator-scripts\") pod \"f721ccf9-9b3f-412e-86ec-13501bf80899\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.516464 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kgnr\" (UniqueName: \"kubernetes.io/projected/f721ccf9-9b3f-412e-86ec-13501bf80899-kube-api-access-9kgnr\") pod \"f721ccf9-9b3f-412e-86ec-13501bf80899\" (UID: \"f721ccf9-9b3f-412e-86ec-13501bf80899\") " Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.517150 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a84457b-b28a-497d-904f-6ba8d1dbd8b1" (UID: "4a84457b-b28a-497d-904f-6ba8d1dbd8b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.517681 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f721ccf9-9b3f-412e-86ec-13501bf80899-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f721ccf9-9b3f-412e-86ec-13501bf80899" (UID: "f721ccf9-9b3f-412e-86ec-13501bf80899"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.524256 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f721ccf9-9b3f-412e-86ec-13501bf80899-kube-api-access-9kgnr" (OuterVolumeSpecName: "kube-api-access-9kgnr") pod "f721ccf9-9b3f-412e-86ec-13501bf80899" (UID: "f721ccf9-9b3f-412e-86ec-13501bf80899"). InnerVolumeSpecName "kube-api-access-9kgnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.524316 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-kube-api-access-49zdf" (OuterVolumeSpecName: "kube-api-access-49zdf") pod "4a84457b-b28a-497d-904f-6ba8d1dbd8b1" (UID: "4a84457b-b28a-497d-904f-6ba8d1dbd8b1"). InnerVolumeSpecName "kube-api-access-49zdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.619188 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49zdf\" (UniqueName: \"kubernetes.io/projected/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-kube-api-access-49zdf\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.619248 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a84457b-b28a-497d-904f-6ba8d1dbd8b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.619263 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f721ccf9-9b3f-412e-86ec-13501bf80899-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.619274 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kgnr\" (UniqueName: \"kubernetes.io/projected/f721ccf9-9b3f-412e-86ec-13501bf80899-kube-api-access-9kgnr\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:05 crc kubenswrapper[4955]: I0202 13:45:05.731475 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb" path="/var/lib/kubelet/pods/46cf5e31-21a3-4f2d-b5cc-4a35eb58ccdb/volumes" Feb 02 13:45:06 crc kubenswrapper[4955]: I0202 13:45:06.063970 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a2bb-account-create-update-b25nd" Feb 02 13:45:06 crc kubenswrapper[4955]: I0202 13:45:06.064383 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a2bb-account-create-update-b25nd" event={"ID":"4a84457b-b28a-497d-904f-6ba8d1dbd8b1","Type":"ContainerDied","Data":"7987342bf137cd4549e93ee8e84a1c6dfee99bbdd9955447d2c4a3117eef9ad9"} Feb 02 13:45:06 crc kubenswrapper[4955]: I0202 13:45:06.064597 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7987342bf137cd4549e93ee8e84a1c6dfee99bbdd9955447d2c4a3117eef9ad9" Feb 02 13:45:06 crc kubenswrapper[4955]: I0202 13:45:06.065644 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-chb64" event={"ID":"f721ccf9-9b3f-412e-86ec-13501bf80899","Type":"ContainerDied","Data":"5bc5ab19eac95562b35a9770b4e094b30bfd5474120898d5472b4e2f490cb3a5"} Feb 02 13:45:06 crc kubenswrapper[4955]: I0202 13:45:06.065685 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc5ab19eac95562b35a9770b4e094b30bfd5474120898d5472b4e2f490cb3a5" Feb 02 13:45:06 crc kubenswrapper[4955]: I0202 13:45:06.065799 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-chb64" Feb 02 13:45:40 crc kubenswrapper[4955]: I0202 13:45:40.780901 4955 scope.go:117] "RemoveContainer" containerID="547d31ab6b277dd1010df3b53f91b5cf201d77244f4c6a5254af439377873da7" Feb 02 13:46:33 crc kubenswrapper[4955]: I0202 13:46:33.017582 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:46:33 crc kubenswrapper[4955]: I0202 13:46:33.020087 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:47:03 crc kubenswrapper[4955]: I0202 13:47:03.016681 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:47:03 crc kubenswrapper[4955]: I0202 13:47:03.017099 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.016881 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.017537 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.017603 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.018289 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a69cc62a3e0e4f04e1954470b35a9d41be0703c2cd0b2c63c5801f304b94da6"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.018353 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://8a69cc62a3e0e4f04e1954470b35a9d41be0703c2cd0b2c63c5801f304b94da6" gracePeriod=600 Feb 02 13:47:33 crc 
kubenswrapper[4955]: I0202 13:47:33.303409 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="8a69cc62a3e0e4f04e1954470b35a9d41be0703c2cd0b2c63c5801f304b94da6" exitCode=0 Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.303491 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"8a69cc62a3e0e4f04e1954470b35a9d41be0703c2cd0b2c63c5801f304b94da6"} Feb 02 13:47:33 crc kubenswrapper[4955]: I0202 13:47:33.303768 4955 scope.go:117] "RemoveContainer" containerID="a26b1d787ed3677e8fe5b86258e79eaad04e753ec6b437d65877a913a74f7ab9" Feb 02 13:47:34 crc kubenswrapper[4955]: I0202 13:47:34.315811 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"} Feb 02 13:49:02 crc kubenswrapper[4955]: I0202 13:49:02.375471 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.959425 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4"] Feb 02 13:49:13 crc kubenswrapper[4955]: E0202 13:49:13.960434 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f721ccf9-9b3f-412e-86ec-13501bf80899" containerName="mariadb-database-create" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.960452 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f721ccf9-9b3f-412e-86ec-13501bf80899" containerName="mariadb-database-create" Feb 02 13:49:13 crc kubenswrapper[4955]: E0202 13:49:13.960495 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15342842-84ab-4afe-b6ad-4999f805c766" containerName="collect-profiles" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.960503 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="15342842-84ab-4afe-b6ad-4999f805c766" containerName="collect-profiles" Feb 02 13:49:13 crc kubenswrapper[4955]: E0202 13:49:13.960532 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a84457b-b28a-497d-904f-6ba8d1dbd8b1" containerName="mariadb-account-create-update" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.960541 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a84457b-b28a-497d-904f-6ba8d1dbd8b1" containerName="mariadb-account-create-update" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.960781 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="15342842-84ab-4afe-b6ad-4999f805c766" containerName="collect-profiles" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.960799 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a84457b-b28a-497d-904f-6ba8d1dbd8b1" containerName="mariadb-account-create-update" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.960817 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f721ccf9-9b3f-412e-86ec-13501bf80899" containerName="mariadb-database-create" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.962458 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.965154 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 13:49:13 crc kubenswrapper[4955]: I0202 13:49:13.976912 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4"] Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.114736 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtp9\" (UniqueName: \"kubernetes.io/projected/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-kube-api-access-dwtp9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.114939 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.115000 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.217286 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtp9\" (UniqueName: \"kubernetes.io/projected/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-kube-api-access-dwtp9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.217359 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.217383 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.217943 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.218465 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.244573 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtp9\" (UniqueName: \"kubernetes.io/projected/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-kube-api-access-dwtp9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.283254 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:14 crc kubenswrapper[4955]: I0202 13:49:14.753843 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4"] Feb 02 13:49:15 crc kubenswrapper[4955]: I0202 13:49:15.191512 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" event={"ID":"4dbf1102-145e-49b6-87bd-1a7ce30e48b9","Type":"ContainerStarted","Data":"b1bf0700a4ae3d0d62e6b77f80f471b85f2bf9ef79fe4ae2af8f1daf1209e8f7"} Feb 02 13:49:15 crc kubenswrapper[4955]: I0202 13:49:15.192118 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" event={"ID":"4dbf1102-145e-49b6-87bd-1a7ce30e48b9","Type":"ContainerStarted","Data":"142ddb1ed5fc01e1600ec8bf1ffed7e58bc4a7832e444f064de1a9d1202ed797"} Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.200467 4955 generic.go:334] "Generic (PLEG): container finished" podID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerID="b1bf0700a4ae3d0d62e6b77f80f471b85f2bf9ef79fe4ae2af8f1daf1209e8f7" exitCode=0 Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.200570 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" event={"ID":"4dbf1102-145e-49b6-87bd-1a7ce30e48b9","Type":"ContainerDied","Data":"b1bf0700a4ae3d0d62e6b77f80f471b85f2bf9ef79fe4ae2af8f1daf1209e8f7"} Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.202688 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.313834 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-22rg9"] Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.315963 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.332134 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22rg9"] Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.457758 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3887b11-d447-46f7-844d-d07d4a1d180c-catalog-content\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.457839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3887b11-d447-46f7-844d-d07d4a1d180c-utilities\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.457958 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl66x\" (UniqueName: \"kubernetes.io/projected/d3887b11-d447-46f7-844d-d07d4a1d180c-kube-api-access-zl66x\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.559778 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl66x\" (UniqueName: \"kubernetes.io/projected/d3887b11-d447-46f7-844d-d07d4a1d180c-kube-api-access-zl66x\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.559923 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3887b11-d447-46f7-844d-d07d4a1d180c-catalog-content\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.559991 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3887b11-d447-46f7-844d-d07d4a1d180c-utilities\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.560477 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3887b11-d447-46f7-844d-d07d4a1d180c-catalog-content\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.560596 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3887b11-d447-46f7-844d-d07d4a1d180c-utilities\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.581808 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zl66x\" (UniqueName: \"kubernetes.io/projected/d3887b11-d447-46f7-844d-d07d4a1d180c-kube-api-access-zl66x\") pod \"redhat-operators-22rg9\" (UID: \"d3887b11-d447-46f7-844d-d07d4a1d180c\") " pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:16 crc kubenswrapper[4955]: I0202 13:49:16.643869 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:17 crc kubenswrapper[4955]: W0202 13:49:17.121202 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3887b11_d447_46f7_844d_d07d4a1d180c.slice/crio-88bb085203217da64bd76e156ee1ab6db129ea7de7dca4169ee0310a73ebd607 WatchSource:0}: Error finding container 88bb085203217da64bd76e156ee1ab6db129ea7de7dca4169ee0310a73ebd607: Status 404 returned error can't find the container with id 88bb085203217da64bd76e156ee1ab6db129ea7de7dca4169ee0310a73ebd607 Feb 02 13:49:17 crc kubenswrapper[4955]: I0202 13:49:17.127307 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22rg9"] Feb 02 13:49:17 crc kubenswrapper[4955]: I0202 13:49:17.214693 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22rg9" event={"ID":"d3887b11-d447-46f7-844d-d07d4a1d180c","Type":"ContainerStarted","Data":"88bb085203217da64bd76e156ee1ab6db129ea7de7dca4169ee0310a73ebd607"} Feb 02 13:49:18 crc kubenswrapper[4955]: I0202 13:49:18.225661 4955 generic.go:334] "Generic (PLEG): container finished" podID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerID="eab99c49dc6da1ea4fd41c2f4662613da91c26e96d61126fff012ba24d14b460" exitCode=0 Feb 02 13:49:18 crc kubenswrapper[4955]: I0202 13:49:18.225820 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" event={"ID":"4dbf1102-145e-49b6-87bd-1a7ce30e48b9","Type":"ContainerDied","Data":"eab99c49dc6da1ea4fd41c2f4662613da91c26e96d61126fff012ba24d14b460"} Feb 02 13:49:18 crc kubenswrapper[4955]: I0202 13:49:18.229628 4955 generic.go:334] "Generic (PLEG): container finished" podID="d3887b11-d447-46f7-844d-d07d4a1d180c" containerID="8d554b7ff5443e5b714cda85520b282fd73abbb3a31e347ee90063f587dcfcc8" exitCode=0 Feb 02 13:49:18 crc kubenswrapper[4955]: I0202 13:49:18.229677 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22rg9" event={"ID":"d3887b11-d447-46f7-844d-d07d4a1d180c","Type":"ContainerDied","Data":"8d554b7ff5443e5b714cda85520b282fd73abbb3a31e347ee90063f587dcfcc8"} Feb 02 13:49:19 crc kubenswrapper[4955]: I0202 13:49:19.241296 4955 generic.go:334] "Generic (PLEG): container finished" podID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerID="0ec786daf9550bfc138d618e78d384e53e00ee1e2444147d57337f4c126d3a82" exitCode=0 Feb 02 13:49:19 crc kubenswrapper[4955]: I0202 13:49:19.241408 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" event={"ID":"4dbf1102-145e-49b6-87bd-1a7ce30e48b9","Type":"ContainerDied","Data":"0ec786daf9550bfc138d618e78d384e53e00ee1e2444147d57337f4c126d3a82"} Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.590319 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.744702 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-util\") pod \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.744817 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-bundle\") pod \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.744927 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwtp9\" (UniqueName: \"kubernetes.io/projected/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-kube-api-access-dwtp9\") pod \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\" (UID: \"4dbf1102-145e-49b6-87bd-1a7ce30e48b9\") " Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.746850 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-bundle" (OuterVolumeSpecName: "bundle") pod "4dbf1102-145e-49b6-87bd-1a7ce30e48b9" (UID: "4dbf1102-145e-49b6-87bd-1a7ce30e48b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.752952 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-kube-api-access-dwtp9" (OuterVolumeSpecName: "kube-api-access-dwtp9") pod "4dbf1102-145e-49b6-87bd-1a7ce30e48b9" (UID: "4dbf1102-145e-49b6-87bd-1a7ce30e48b9"). InnerVolumeSpecName "kube-api-access-dwtp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.755622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-util" (OuterVolumeSpecName: "util") pod "4dbf1102-145e-49b6-87bd-1a7ce30e48b9" (UID: "4dbf1102-145e-49b6-87bd-1a7ce30e48b9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.847807 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.847840 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:49:20 crc kubenswrapper[4955]: I0202 13:49:20.847852 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwtp9\" (UniqueName: \"kubernetes.io/projected/4dbf1102-145e-49b6-87bd-1a7ce30e48b9-kube-api-access-dwtp9\") on node \"crc\" DevicePath \"\"" Feb 02 13:49:21 crc kubenswrapper[4955]: I0202 13:49:21.260297 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" event={"ID":"4dbf1102-145e-49b6-87bd-1a7ce30e48b9","Type":"ContainerDied","Data":"142ddb1ed5fc01e1600ec8bf1ffed7e58bc4a7832e444f064de1a9d1202ed797"} Feb 02 13:49:21 crc kubenswrapper[4955]: I0202 13:49:21.260721 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="142ddb1ed5fc01e1600ec8bf1ffed7e58bc4a7832e444f064de1a9d1202ed797" Feb 02 13:49:21 crc kubenswrapper[4955]: I0202 13:49:21.260353 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.831085 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7"] Feb 02 13:49:31 crc kubenswrapper[4955]: E0202 13:49:31.832996 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="pull" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.833017 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="pull" Feb 02 13:49:31 crc kubenswrapper[4955]: E0202 13:49:31.833049 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="extract" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.833058 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="extract" Feb 02 13:49:31 crc kubenswrapper[4955]: E0202 13:49:31.833088 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="util" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.833097 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="util" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.833338 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbf1102-145e-49b6-87bd-1a7ce30e48b9" containerName="extract" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.834158 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.840355 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-fp8xh" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.840598 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.840898 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.847414 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7"] Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.992544 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4s9\" (UniqueName: \"kubernetes.io/projected/e6cc386e-df97-4820-8215-ad295f03667a-kube-api-access-zv4s9\") pod \"obo-prometheus-operator-68bc856cb9-27mc7\" (UID: \"e6cc386e-df97-4820-8215-ad295f03667a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" Feb 02 13:49:31 crc kubenswrapper[4955]: I0202 13:49:31.963538 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.005509 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.010642 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.011198 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-7x6mj" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.031396 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.036155 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.053649 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.062221 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.095808 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/853d1cb1-23c3-44ac-8b6d-5b645a3757f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk\" (UID: \"853d1cb1-23c3-44ac-8b6d-5b645a3757f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.095861 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4s9\" (UniqueName: \"kubernetes.io/projected/e6cc386e-df97-4820-8215-ad295f03667a-kube-api-access-zv4s9\") pod \"obo-prometheus-operator-68bc856cb9-27mc7\" (UID: \"e6cc386e-df97-4820-8215-ad295f03667a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.095914 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/853d1cb1-23c3-44ac-8b6d-5b645a3757f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk\" (UID: \"853d1cb1-23c3-44ac-8b6d-5b645a3757f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.119201 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4s9\" (UniqueName: \"kubernetes.io/projected/e6cc386e-df97-4820-8215-ad295f03667a-kube-api-access-zv4s9\") pod \"obo-prometheus-operator-68bc856cb9-27mc7\" (UID: \"e6cc386e-df97-4820-8215-ad295f03667a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.163175 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.172679 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hr5gh"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.175229 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.181024 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-b7wdc" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.181317 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.198179 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98fef4d4-9fa6-4f3b-8f57-ca22ea29b499-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv\" (UID: \"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.198270 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/853d1cb1-23c3-44ac-8b6d-5b645a3757f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk\" (UID: \"853d1cb1-23c3-44ac-8b6d-5b645a3757f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.198332 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/853d1cb1-23c3-44ac-8b6d-5b645a3757f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk\" (UID: \"853d1cb1-23c3-44ac-8b6d-5b645a3757f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.198381 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98fef4d4-9fa6-4f3b-8f57-ca22ea29b499-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv\" (UID: \"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.221697 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/853d1cb1-23c3-44ac-8b6d-5b645a3757f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk\" (UID: \"853d1cb1-23c3-44ac-8b6d-5b645a3757f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.225320 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/853d1cb1-23c3-44ac-8b6d-5b645a3757f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk\" (UID: \"853d1cb1-23c3-44ac-8b6d-5b645a3757f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.254078 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hr5gh"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.300874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-99d79\" (UniqueName: \"kubernetes.io/projected/102afca0-ebad-4201-86e1-a07bff67d684-kube-api-access-99d79\") pod \"observability-operator-59bdc8b94-hr5gh\" (UID: \"102afca0-ebad-4201-86e1-a07bff67d684\") " pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.300930 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/102afca0-ebad-4201-86e1-a07bff67d684-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hr5gh\" (UID: \"102afca0-ebad-4201-86e1-a07bff67d684\") " pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.301046 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98fef4d4-9fa6-4f3b-8f57-ca22ea29b499-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv\" (UID: \"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.301101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98fef4d4-9fa6-4f3b-8f57-ca22ea29b499-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv\" (UID: \"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.306051 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98fef4d4-9fa6-4f3b-8f57-ca22ea29b499-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv\" (UID: \"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.316904 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98fef4d4-9fa6-4f3b-8f57-ca22ea29b499-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv\" (UID: \"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.352078 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.375227 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8jmvb"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.376783 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.379998 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.383353 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zz7qf" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.408234 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/102afca0-ebad-4201-86e1-a07bff67d684-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hr5gh\" (UID: \"102afca0-ebad-4201-86e1-a07bff67d684\") " pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.408316 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99d79\" (UniqueName: \"kubernetes.io/projected/102afca0-ebad-4201-86e1-a07bff67d684-kube-api-access-99d79\") pod \"observability-operator-59bdc8b94-hr5gh\" (UID: \"102afca0-ebad-4201-86e1-a07bff67d684\") " pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.414471 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/102afca0-ebad-4201-86e1-a07bff67d684-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hr5gh\" (UID: \"102afca0-ebad-4201-86e1-a07bff67d684\") " pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.414570 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8jmvb"] Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.442384 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22rg9" event={"ID":"d3887b11-d447-46f7-844d-d07d4a1d180c","Type":"ContainerStarted","Data":"8d0c8751b35fd82d35197718ae65c48538ecebb1c5f9fac11ff1c7fcafccb32e"} Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.452774 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99d79\" (UniqueName: \"kubernetes.io/projected/102afca0-ebad-4201-86e1-a07bff67d684-kube-api-access-99d79\") pod \"observability-operator-59bdc8b94-hr5gh\" (UID: \"102afca0-ebad-4201-86e1-a07bff67d684\") " pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.517669 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c857eb66-83ed-4042-9cdf-371ab6f7cbba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8jmvb\" (UID: \"c857eb66-83ed-4042-9cdf-371ab6f7cbba\") " pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.517770 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hlb4\" (UniqueName: \"kubernetes.io/projected/c857eb66-83ed-4042-9cdf-371ab6f7cbba-kube-api-access-2hlb4\") pod \"perses-operator-5bf474d74f-8jmvb\" (UID: \"c857eb66-83ed-4042-9cdf-371ab6f7cbba\") " pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.605926 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.620076 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c857eb66-83ed-4042-9cdf-371ab6f7cbba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8jmvb\" (UID: \"c857eb66-83ed-4042-9cdf-371ab6f7cbba\") " pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.620177 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hlb4\" (UniqueName: \"kubernetes.io/projected/c857eb66-83ed-4042-9cdf-371ab6f7cbba-kube-api-access-2hlb4\") pod \"perses-operator-5bf474d74f-8jmvb\" (UID: \"c857eb66-83ed-4042-9cdf-371ab6f7cbba\") " pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.621407 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c857eb66-83ed-4042-9cdf-371ab6f7cbba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8jmvb\" (UID: \"c857eb66-83ed-4042-9cdf-371ab6f7cbba\") " pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.660335 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hlb4\" (UniqueName: \"kubernetes.io/projected/c857eb66-83ed-4042-9cdf-371ab6f7cbba-kube-api-access-2hlb4\") pod \"perses-operator-5bf474d74f-8jmvb\" (UID: \"c857eb66-83ed-4042-9cdf-371ab6f7cbba\") " pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.731844 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:32 crc kubenswrapper[4955]: I0202 13:49:32.849741 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7"] Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.017820 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.017875 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.038075 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk"] Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.075437 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hr5gh"] Feb 02 13:49:33 crc kubenswrapper[4955]: W0202 13:49:33.101475 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod102afca0_ebad_4201_86e1_a07bff67d684.slice/crio-1fa4d86e0840834d20e6f695e7e19b1c55326963c932efa8c6e80fe67f0505c3 WatchSource:0}: Error finding container 1fa4d86e0840834d20e6f695e7e19b1c55326963c932efa8c6e80fe67f0505c3: Status 404 returned error can't find the container with id 1fa4d86e0840834d20e6f695e7e19b1c55326963c932efa8c6e80fe67f0505c3 Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.171409 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv"] Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.318441 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8jmvb"] Feb 02 13:49:33 crc kubenswrapper[4955]: W0202 13:49:33.329420 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc857eb66_83ed_4042_9cdf_371ab6f7cbba.slice/crio-39f4b0aeee3dfe2b32061bfc903e4482dfb0b489f01e01c390987b8ae17b653b WatchSource:0}: Error finding container 39f4b0aeee3dfe2b32061bfc903e4482dfb0b489f01e01c390987b8ae17b653b: Status 404 returned error can't find the container with id 39f4b0aeee3dfe2b32061bfc903e4482dfb0b489f01e01c390987b8ae17b653b Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.451778 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" event={"ID":"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499","Type":"ContainerStarted","Data":"0c7cdb027bd794423eca04f6f6f2e0b3edc73a961e73684eb9ac38c842565dc0"} Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.454180 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" 
event={"ID":"853d1cb1-23c3-44ac-8b6d-5b645a3757f7","Type":"ContainerStarted","Data":"73c8e5c6005a6bec4293a9f5f7a5d8e695033bb89d586571d1ae55a1150c453d"} Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.455631 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" event={"ID":"102afca0-ebad-4201-86e1-a07bff67d684","Type":"ContainerStarted","Data":"1fa4d86e0840834d20e6f695e7e19b1c55326963c932efa8c6e80fe67f0505c3"} Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.456708 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" event={"ID":"c857eb66-83ed-4042-9cdf-371ab6f7cbba","Type":"ContainerStarted","Data":"39f4b0aeee3dfe2b32061bfc903e4482dfb0b489f01e01c390987b8ae17b653b"} Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.457675 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" event={"ID":"e6cc386e-df97-4820-8215-ad295f03667a","Type":"ContainerStarted","Data":"5bd69453302b1bfa415a092d247ace2c7d67081aae219b33330ed3fe53d36293"} Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.459279 4955 generic.go:334] "Generic (PLEG): container finished" podID="d3887b11-d447-46f7-844d-d07d4a1d180c" containerID="8d0c8751b35fd82d35197718ae65c48538ecebb1c5f9fac11ff1c7fcafccb32e" exitCode=0 Feb 02 13:49:33 crc kubenswrapper[4955]: I0202 13:49:33.459321 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22rg9" event={"ID":"d3887b11-d447-46f7-844d-d07d4a1d180c","Type":"ContainerDied","Data":"8d0c8751b35fd82d35197718ae65c48538ecebb1c5f9fac11ff1c7fcafccb32e"} Feb 02 13:49:35 crc kubenswrapper[4955]: I0202 13:49:35.488895 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22rg9" event={"ID":"d3887b11-d447-46f7-844d-d07d4a1d180c","Type":"ContainerStarted","Data":"fa7fe61d7d2e019b06f6b50a44c762518ec5b5b32bfeba44078a5546da123600"} Feb 02 13:49:35 crc kubenswrapper[4955]: I0202 13:49:35.518498 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-22rg9" podStartSLOduration=3.005611128 podStartE2EDuration="19.518483878s" podCreationTimestamp="2026-02-02 13:49:16 +0000 UTC" firstStartedPulling="2026-02-02 13:49:18.231004963 +0000 UTC m=+2809.143341413" lastFinishedPulling="2026-02-02 13:49:34.743877713 +0000 UTC m=+2825.656214163" observedRunningTime="2026-02-02 13:49:35.517547686 +0000 UTC m=+2826.429884146" watchObservedRunningTime="2026-02-02 13:49:35.518483878 +0000 UTC m=+2826.430820318" Feb 02 13:49:36 crc kubenswrapper[4955]: I0202 13:49:36.644055 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:36 crc kubenswrapper[4955]: I0202 13:49:36.644102 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:37 crc kubenswrapper[4955]: I0202 13:49:37.740977 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-22rg9" podUID="d3887b11-d447-46f7-844d-d07d4a1d180c" containerName="registry-server" probeResult="failure" output=< Feb 02 13:49:37 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 13:49:37 crc kubenswrapper[4955]: > Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.664837 4955 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" event={"ID":"98fef4d4-9fa6-4f3b-8f57-ca22ea29b499","Type":"ContainerStarted","Data":"bd22262ee451d0c6eab45f8e426600cdef09bb05164f8bbcf91a869a553dbd91"} Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.671454 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" event={"ID":"853d1cb1-23c3-44ac-8b6d-5b645a3757f7","Type":"ContainerStarted","Data":"5892cc1843af5893cbdd339664aff12eeae8e99aeab2e85aca07819631417bc4"} Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.695518 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" event={"ID":"102afca0-ebad-4201-86e1-a07bff67d684","Type":"ContainerStarted","Data":"badc121e1ef882ac7f39ada0e71aaa65a0add58182ff46129b4f3907a02ae8db"} Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.696450 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.698363 4955 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-hr5gh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.1:8081/healthz\": dial tcp 10.217.1.1:8081: connect: connection refused" start-of-body= Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.698428 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" podUID="102afca0-ebad-4201-86e1-a07bff67d684" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.1:8081/healthz\": dial tcp 10.217.1.1:8081: connect: connection refused" Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.718978 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv" podStartSLOduration=2.7435604959999997 podStartE2EDuration="15.718950683s" podCreationTimestamp="2026-02-02 13:49:31 +0000 UTC" firstStartedPulling="2026-02-02 13:49:33.17877081 +0000 UTC m=+2824.091107260" lastFinishedPulling="2026-02-02 13:49:46.154160967 +0000 UTC m=+2837.066497447" observedRunningTime="2026-02-02 13:49:46.688850643 +0000 UTC m=+2837.601187093" watchObservedRunningTime="2026-02-02 13:49:46.718950683 +0000 UTC m=+2837.631287123" Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.766799 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.767519 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk" podStartSLOduration=2.653041154 podStartE2EDuration="15.767497235s" podCreationTimestamp="2026-02-02 13:49:31 +0000 UTC" firstStartedPulling="2026-02-02 13:49:33.040303971 +0000 UTC m=+2823.952640421" lastFinishedPulling="2026-02-02 13:49:46.154760052 +0000 UTC m=+2837.067096502" observedRunningTime="2026-02-02 13:49:46.726710043 +0000 UTC m=+2837.639046493" watchObservedRunningTime="2026-02-02 13:49:46.767497235 +0000 UTC m=+2837.679833685" Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.797538 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" podStartSLOduration=1.674374236 podStartE2EDuration="14.797515151s" podCreationTimestamp="2026-02-02 13:49:32 +0000 UTC" firstStartedPulling="2026-02-02 13:49:33.104672951 +0000 UTC m=+2824.017009401" lastFinishedPulling="2026-02-02 13:49:46.227813866 +0000 UTC m=+2837.140150316" observedRunningTime="2026-02-02 13:49:46.76731483 +0000 UTC m=+2837.679651290" watchObservedRunningTime="2026-02-02 13:49:46.797515151 +0000 UTC m=+2837.709851601" Feb 02 13:49:46 crc kubenswrapper[4955]: I0202 13:49:46.847286 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-22rg9" Feb 02 13:49:47 crc kubenswrapper[4955]: I0202 13:49:47.707269 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" event={"ID":"e6cc386e-df97-4820-8215-ad295f03667a","Type":"ContainerStarted","Data":"15fb9cb506a340d3497afe5a2b0cca3527e90bfe9a600190128a5745102dcdce"} Feb 02 13:49:47 crc kubenswrapper[4955]: I0202 13:49:47.709468 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" event={"ID":"c857eb66-83ed-4042-9cdf-371ab6f7cbba","Type":"ContainerStarted","Data":"1186dcd597d24e428d220f2f63b991d41fbf8336cc368f2eeff898cb09f2ded4"} Feb 02 13:49:47 crc kubenswrapper[4955]: I0202 13:49:47.710922 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-hr5gh" Feb 02 13:49:47 crc kubenswrapper[4955]: I0202 13:49:47.736504 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27mc7" podStartSLOduration=3.43494958 podStartE2EDuration="16.736481333s" podCreationTimestamp="2026-02-02 13:49:31 +0000 UTC" firstStartedPulling="2026-02-02 13:49:32.862372353 +0000 UTC m=+2823.774708803" lastFinishedPulling="2026-02-02 13:49:46.163904106 +0000 UTC m=+2837.076240556" observedRunningTime="2026-02-02 13:49:47.726350323 +0000 UTC m=+2838.638686783" watchObservedRunningTime="2026-02-02 13:49:47.736481333 +0000 UTC m=+2838.648817783" Feb 02 13:49:47 crc kubenswrapper[4955]: I0202 13:49:47.792101 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" podStartSLOduration=2.959347962 podStartE2EDuration="15.792082157s" podCreationTimestamp="2026-02-02 13:49:32 +0000 UTC" firstStartedPulling="2026-02-02 13:49:33.333173751 +0000 UTC m=+2824.245510201" lastFinishedPulling="2026-02-02 13:49:46.165907946 +0000 UTC m=+2837.078244396" observedRunningTime="2026-02-02 13:49:47.787074845 +0000 UTC m=+2838.699411295" watchObservedRunningTime="2026-02-02 13:49:47.792082157 +0000 UTC m=+2838.704418607" Feb 02 13:49:48 crc kubenswrapper[4955]: I0202 13:49:48.337703 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22rg9"] Feb 02 13:49:48 crc kubenswrapper[4955]: I0202 13:49:48.705955 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgd84"] Feb 02 13:49:48 crc kubenswrapper[4955]: I0202 13:49:48.706238 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qgd84" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="registry-server" containerID="cri-o://4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1" 
gracePeriod=2 Feb 02 13:49:48 crc kubenswrapper[4955]: I0202 13:49:48.721258 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.248545 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.345241 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-catalog-content\") pod \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.345308 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rlgh\" (UniqueName: \"kubernetes.io/projected/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-kube-api-access-2rlgh\") pod \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.345358 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-utilities\") pod \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\" (UID: \"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a\") " Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.346950 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-utilities" (OuterVolumeSpecName: "utilities") pod "ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" (UID: "ec7a8d60-ca53-4b69-aa03-06fdede2ae9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.355182 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-kube-api-access-2rlgh" (OuterVolumeSpecName: "kube-api-access-2rlgh") pod "ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" (UID: "ec7a8d60-ca53-4b69-aa03-06fdede2ae9a"). InnerVolumeSpecName "kube-api-access-2rlgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.447884 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.447916 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rlgh\" (UniqueName: \"kubernetes.io/projected/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-kube-api-access-2rlgh\") on node \"crc\" DevicePath \"\"" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.466676 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" (UID: "ec7a8d60-ca53-4b69-aa03-06fdede2ae9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.549919 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.731320 4955 generic.go:334] "Generic (PLEG): container finished" podID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerID="4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1" exitCode=0 Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.732189 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgd84" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.735474 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgd84" event={"ID":"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a","Type":"ContainerDied","Data":"4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1"} Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.735513 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgd84" event={"ID":"ec7a8d60-ca53-4b69-aa03-06fdede2ae9a","Type":"ContainerDied","Data":"4260b269c7a5e14c3d3b9f7dfbe38cde8fde8d7cdfbd160cde62a87564ca4fe1"} Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.735536 4955 scope.go:117] "RemoveContainer" containerID="4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.759363 4955 scope.go:117] "RemoveContainer" containerID="f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.764366 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgd84"] Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.773746 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qgd84"] Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.796641 4955 scope.go:117] "RemoveContainer" containerID="8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.836276 4955 scope.go:117] "RemoveContainer" containerID="4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1" Feb 02 13:49:49 crc kubenswrapper[4955]: E0202 13:49:49.836683 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1\": container with ID starting with 4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1 not found: ID does not exist" containerID="4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.836732 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1"} err="failed to get container status \"4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1\": rpc error: code = NotFound desc = could not find container \"4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1\": container with ID starting with 4846e0d9474430287b49b23e9c34b6ff4156bfb57fb3493d3f96fd19e2d538e1 not found: ID does not exist" Feb 02 13:49:49 crc 
kubenswrapper[4955]: I0202 13:49:49.836772 4955 scope.go:117] "RemoveContainer" containerID="f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c" Feb 02 13:49:49 crc kubenswrapper[4955]: E0202 13:49:49.837028 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c\": container with ID starting with f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c not found: ID does not exist" containerID="f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.837060 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c"} err="failed to get container status \"f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c\": rpc error: code = NotFound desc = could not find container \"f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c\": container with ID starting with f06c036c020e411f58579ae9a7c7c29712d07d2e570b3a905e923085026f0e8c not found: ID does not exist" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.837079 4955 scope.go:117] "RemoveContainer" containerID="8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253" Feb 02 13:49:49 crc kubenswrapper[4955]: E0202 13:49:49.838425 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253\": container with ID starting with 8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253 not found: ID does not exist" containerID="8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253" Feb 02 13:49:49 crc kubenswrapper[4955]: I0202 13:49:49.838486 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253"} err="failed to get container status \"8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253\": rpc error: code = NotFound desc = could not find container \"8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253\": container with ID starting with 8008f9b1f133155da38b2dc73bfa364941972d11fb7b7a1473e38ac9c4e7c253 not found: ID does not exist" Feb 02 13:49:51 crc kubenswrapper[4955]: I0202 13:49:51.737437 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" path="/var/lib/kubelet/pods/ec7a8d60-ca53-4b69-aa03-06fdede2ae9a/volumes" Feb 02 13:49:52 crc kubenswrapper[4955]: I0202 13:49:52.735690 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8jmvb" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.645526 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 02 13:50:01 crc kubenswrapper[4955]: E0202 13:50:01.646484 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="extract-content" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.646497 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="extract-content" Feb 02 13:50:01 crc kubenswrapper[4955]: E0202 13:50:01.646532 4955 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="extract-utilities" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.646538 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="extract-utilities" Feb 02 13:50:01 crc kubenswrapper[4955]: E0202 13:50:01.646546 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="registry-server" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.646580 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="registry-server" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.646792 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7a8d60-ca53-4b69-aa03-06fdede2ae9a" containerName="registry-server" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.648277 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.654652 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.655067 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.655091 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.655136 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.655501 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-sv9rh" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.660159 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783226 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/faac39c6-c177-4578-8943-1745793fd9cf-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783278 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783333 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/faac39c6-c177-4578-8943-1745793fd9cf-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783354 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrxg\" (UniqueName: \"kubernetes.io/projected/faac39c6-c177-4578-8943-1745793fd9cf-kube-api-access-pqrxg\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783402 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783449 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.783491 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/faac39c6-c177-4578-8943-1745793fd9cf-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885046 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885129 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/faac39c6-c177-4578-8943-1745793fd9cf-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885153 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrxg\" (UniqueName: \"kubernetes.io/projected/faac39c6-c177-4578-8943-1745793fd9cf-kube-api-access-pqrxg\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885200 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885248 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 
13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885292 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/faac39c6-c177-4578-8943-1745793fd9cf-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.885336 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/faac39c6-c177-4578-8943-1745793fd9cf-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.886166 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/faac39c6-c177-4578-8943-1745793fd9cf-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.892271 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.893015 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.894671 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/faac39c6-c177-4578-8943-1745793fd9cf-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.895420 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/faac39c6-c177-4578-8943-1745793fd9cf-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.898782 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/faac39c6-c177-4578-8943-1745793fd9cf-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.911476 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrxg\" (UniqueName: \"kubernetes.io/projected/faac39c6-c177-4578-8943-1745793fd9cf-kube-api-access-pqrxg\") pod \"alertmanager-metric-storage-0\" (UID: \"faac39c6-c177-4578-8943-1745793fd9cf\") " pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:01 crc kubenswrapper[4955]: I0202 13:50:01.965274 4955 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.180330 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.183366 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.192128 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.192327 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.192522 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.192729 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.192885 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.193151 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pv5nd" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.193292 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.193441 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.216294 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298088 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6kq\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-kube-api-access-bt6kq\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298179 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298234 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298263 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298298 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298344 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298373 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298399 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.298436 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400177 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400249 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc 
kubenswrapper[4955]: I0202 13:50:02.400279 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400317 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400339 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt6kq\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-kube-api-access-bt6kq\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400360 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400384 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400432 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400456 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400489 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.400859 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") device mount path \"/mnt/openstack/pv10\"" 
pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.401395 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.401505 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.406050 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.406654 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.407149 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.419717 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.421990 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.423353 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.433417 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt6kq\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-kube-api-access-bt6kq\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " 
pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.449499 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.517031 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.518857 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 02 13:50:02 crc kubenswrapper[4955]: W0202 13:50:02.522801 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaac39c6_c177_4578_8943_1745793fd9cf.slice/crio-c63f36d0be4a95b782ab9ca4f3e3e66dd995131d91a4f5d184bf56fd93ffdf80 WatchSource:0}: Error finding container c63f36d0be4a95b782ab9ca4f3e3e66dd995131d91a4f5d184bf56fd93ffdf80: Status 404 returned error can't find the container with id c63f36d0be4a95b782ab9ca4f3e3e66dd995131d91a4f5d184bf56fd93ffdf80 Feb 02 13:50:02 crc kubenswrapper[4955]: I0202 13:50:02.856319 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faac39c6-c177-4578-8943-1745793fd9cf","Type":"ContainerStarted","Data":"c63f36d0be4a95b782ab9ca4f3e3e66dd995131d91a4f5d184bf56fd93ffdf80"} Feb 02 13:50:03 crc kubenswrapper[4955]: I0202 13:50:03.017023 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:50:03 crc kubenswrapper[4955]: I0202 13:50:03.017091 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:50:03 crc kubenswrapper[4955]: I0202 13:50:03.050898 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:50:03 crc kubenswrapper[4955]: I0202 13:50:03.876381 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerStarted","Data":"b35d61665e89672b8deaf302b93f15627837424aff662ea2bcbdda0b762433a9"} Feb 02 13:50:08 crc kubenswrapper[4955]: I0202 13:50:08.921758 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerStarted","Data":"69aa37678a192b82e95826729ec3bfe5c20e5381869149c4a83f49a83436e283"} Feb 02 13:50:08 crc kubenswrapper[4955]: I0202 13:50:08.923305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faac39c6-c177-4578-8943-1745793fd9cf","Type":"ContainerStarted","Data":"7e30f536d6b751218574b365e79e03ab1497b78801394aec658257bad2f0a4e0"} Feb 02 13:50:14 crc kubenswrapper[4955]: I0202 13:50:14.981126 4955 generic.go:334] "Generic 
(PLEG): container finished" podID="faac39c6-c177-4578-8943-1745793fd9cf" containerID="7e30f536d6b751218574b365e79e03ab1497b78801394aec658257bad2f0a4e0" exitCode=0 Feb 02 13:50:14 crc kubenswrapper[4955]: I0202 13:50:14.981182 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faac39c6-c177-4578-8943-1745793fd9cf","Type":"ContainerDied","Data":"7e30f536d6b751218574b365e79e03ab1497b78801394aec658257bad2f0a4e0"} Feb 02 13:50:15 crc kubenswrapper[4955]: I0202 13:50:15.995198 4955 generic.go:334] "Generic (PLEG): container finished" podID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerID="69aa37678a192b82e95826729ec3bfe5c20e5381869149c4a83f49a83436e283" exitCode=0 Feb 02 13:50:15 crc kubenswrapper[4955]: I0202 13:50:15.995242 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerDied","Data":"69aa37678a192b82e95826729ec3bfe5c20e5381869149c4a83f49a83436e283"} Feb 02 13:50:18 crc kubenswrapper[4955]: I0202 13:50:18.016783 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faac39c6-c177-4578-8943-1745793fd9cf","Type":"ContainerStarted","Data":"e6bcf5098f2c8832790c1957e7721a2f7a122ebd66e30550bb41122606053ce4"} Feb 02 13:50:21 crc kubenswrapper[4955]: I0202 13:50:21.047545 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faac39c6-c177-4578-8943-1745793fd9cf","Type":"ContainerStarted","Data":"bc131e6031403d43372f104027e4f089a46c8da182af53217f7d09bd3cb63d7d"} Feb 02 13:50:21 crc kubenswrapper[4955]: I0202 13:50:21.048456 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:21 crc kubenswrapper[4955]: I0202 13:50:21.051123 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 02 13:50:21 crc kubenswrapper[4955]: I0202 13:50:21.086236 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.338042318 podStartE2EDuration="20.086218326s" podCreationTimestamp="2026-02-02 13:50:01 +0000 UTC" firstStartedPulling="2026-02-02 13:50:02.529830529 +0000 UTC m=+2853.442166979" lastFinishedPulling="2026-02-02 13:50:17.278006537 +0000 UTC m=+2868.190342987" observedRunningTime="2026-02-02 13:50:21.075196686 +0000 UTC m=+2871.987533136" watchObservedRunningTime="2026-02-02 13:50:21.086218326 +0000 UTC m=+2871.998554766" Feb 02 13:50:23 crc kubenswrapper[4955]: I0202 13:50:23.065361 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerStarted","Data":"bf268d338ab163ff8870a03e6b628401a112a466fecd392786a8e696e4298292"} Feb 02 13:50:26 crc kubenswrapper[4955]: I0202 13:50:26.096049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerStarted","Data":"e058aa55210d954ed53166cccd72526a6133915bc12c9c967d60a59515b3ec2f"} Feb 02 13:50:29 crc kubenswrapper[4955]: I0202 13:50:29.129723 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerStarted","Data":"d05e5817392c42432624a61c9499fa43ffb166d245175ace2b3e981ef2af0552"} Feb 02 13:50:29 crc kubenswrapper[4955]: I0202 13:50:29.152706 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=2.445571601 podStartE2EDuration="28.152685633s" podCreationTimestamp="2026-02-02 13:50:01 +0000 UTC" firstStartedPulling="2026-02-02 13:50:03.05370803 +0000 UTC m=+2853.966044480" lastFinishedPulling="2026-02-02 13:50:28.760822062 +0000 UTC m=+2879.673158512" observedRunningTime="2026-02-02 13:50:29.149419592 +0000 UTC m=+2880.061756062" watchObservedRunningTime="2026-02-02 13:50:29.152685633 +0000 UTC m=+2880.065022093" Feb 02 13:50:32 crc kubenswrapper[4955]: I0202 13:50:32.517615 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:32 crc kubenswrapper[4955]: I0202 13:50:32.518192 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:32 crc kubenswrapper[4955]: I0202 13:50:32.521515 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.016615 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.016903 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.017016 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.017764 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.017911 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" gracePeriod=600 Feb 02 13:50:33 crc kubenswrapper[4955]: E0202 13:50:33.143782 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" 
podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.171445 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" exitCode=0 Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.171530 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"} Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.171904 4955 scope.go:117] "RemoveContainer" containerID="8a69cc62a3e0e4f04e1954470b35a9d41be0703c2cd0b2c63c5801f304b94da6" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.172650 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:50:33 crc kubenswrapper[4955]: E0202 13:50:33.173014 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:50:33 crc kubenswrapper[4955]: I0202 13:50:33.174211 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.369538 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.370159 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" containerName="openstackclient" containerID="cri-o://f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2" gracePeriod=2 Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.384544 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.421821 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: E0202 13:50:34.422206 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" containerName="openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.422223 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" containerName="openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.422433 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" containerName="openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.423053 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.451879 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" podUID="0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.472208 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.520315 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: E0202 13:50:34.521545 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-dlb6s openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.533336 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config-secret\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.533419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlb6s\" (UniqueName: \"kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.533585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.533612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.544732 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.572833 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.576310 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.586322 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.636734 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config-secret\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.636783 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b94fc7-9059-4c45-b19d-ccca3f345be1-openstack-config\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.636804 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b94fc7-9059-4c45-b19d-ccca3f345be1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.636828 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlb6s\" (UniqueName: \"kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.636928 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2542f\" (UniqueName: \"kubernetes.io/projected/b9b94fc7-9059-4c45-b19d-ccca3f345be1-kube-api-access-2542f\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.637002 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.637018 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.637066 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b94fc7-9059-4c45-b19d-ccca3f345be1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: E0202 13:50:34.638781 4955 projected.go:194] Error preparing data for projected volume kube-api-access-dlb6s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the 
bound object reference (0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86) does not match the UID in record. The object might have been deleted and then recreated Feb 02 13:50:34 crc kubenswrapper[4955]: E0202 13:50:34.638864 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s podName:0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86 nodeName:}" failed. No retries permitted until 2026-02-02 13:50:35.138842504 +0000 UTC m=+2886.051178954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dlb6s" (UniqueName: "kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s") pod "openstackclient" (UID: "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86) does not match the UID in record. The object might have been deleted and then recreated Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.639226 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.644972 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.655352 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config-secret\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.738240 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b94fc7-9059-4c45-b19d-ccca3f345be1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.738531 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b94fc7-9059-4c45-b19d-ccca3f345be1-openstack-config\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.738550 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b94fc7-9059-4c45-b19d-ccca3f345be1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.738791 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2542f\" (UniqueName: \"kubernetes.io/projected/b9b94fc7-9059-4c45-b19d-ccca3f345be1-kube-api-access-2542f\") pod \"openstackclient\" (UID: 
\"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.740136 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b94fc7-9059-4c45-b19d-ccca3f345be1-openstack-config\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.744135 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b94fc7-9059-4c45-b19d-ccca3f345be1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.744682 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b94fc7-9059-4c45-b19d-ccca3f345be1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.766243 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2542f\" (UniqueName: \"kubernetes.io/projected/b9b94fc7-9059-4c45-b19d-ccca3f345be1-kube-api-access-2542f\") pod \"openstackclient\" (UID: \"b9b94fc7-9059-4c45-b19d-ccca3f345be1\") " pod="openstack/openstackclient" Feb 02 13:50:34 crc kubenswrapper[4955]: I0202 13:50:34.896702 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.150101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlb6s\" (UniqueName: \"kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s\") pod \"openstackclient\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " pod="openstack/openstackclient" Feb 02 13:50:35 crc kubenswrapper[4955]: E0202 13:50:35.153258 4955 projected.go:194] Error preparing data for projected volume kube-api-access-dlb6s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86) does not match the UID in record. The object might have been deleted and then recreated Feb 02 13:50:35 crc kubenswrapper[4955]: E0202 13:50:35.153331 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s podName:0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86 nodeName:}" failed. No retries permitted until 2026-02-02 13:50:36.153311403 +0000 UTC m=+2887.065647843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dlb6s" (UniqueName: "kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s") pod "openstackclient" (UID: "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86) does not match the UID in record. The object might have been deleted and then recreated Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.210722 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.214410 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" podUID="b9b94fc7-9059-4c45-b19d-ccca3f345be1" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.229421 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.354686 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-combined-ca-bundle\") pod \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.354724 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config-secret\") pod \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.354794 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config\") pod \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\" (UID: \"0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86\") " Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.355409 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" (UID: "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.355922 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlb6s\" (UniqueName: \"kubernetes.io/projected/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-kube-api-access-dlb6s\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.355936 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.361397 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" (UID: "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.361410 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" (UID: "0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.457582 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.457613 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.484889 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.558199 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.558686 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="prometheus" containerID="cri-o://bf268d338ab163ff8870a03e6b628401a112a466fecd392786a8e696e4298292" gracePeriod=600 Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.558743 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="thanos-sidecar" containerID="cri-o://d05e5817392c42432624a61c9499fa43ffb166d245175ace2b3e981ef2af0552" gracePeriod=600 Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.558769 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="config-reloader" containerID="cri-o://e058aa55210d954ed53166cccd72526a6133915bc12c9c967d60a59515b3ec2f" gracePeriod=600 Feb 02 13:50:35 crc kubenswrapper[4955]: I0202 13:50:35.728978 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" path="/var/lib/kubelet/pods/0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86/volumes" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.230338 4955 generic.go:334] "Generic (PLEG): container finished" podID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerID="d05e5817392c42432624a61c9499fa43ffb166d245175ace2b3e981ef2af0552" exitCode=0 Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.230367 4955 generic.go:334] "Generic (PLEG): container finished" podID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerID="e058aa55210d954ed53166cccd72526a6133915bc12c9c967d60a59515b3ec2f" exitCode=0 Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.230376 4955 generic.go:334] "Generic (PLEG): container finished" podID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerID="bf268d338ab163ff8870a03e6b628401a112a466fecd392786a8e696e4298292" exitCode=0 Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.230416 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerDied","Data":"d05e5817392c42432624a61c9499fa43ffb166d245175ace2b3e981ef2af0552"} Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.230441 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerDied","Data":"e058aa55210d954ed53166cccd72526a6133915bc12c9c967d60a59515b3ec2f"} Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.230467 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerDied","Data":"bf268d338ab163ff8870a03e6b628401a112a466fecd392786a8e696e4298292"} Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.232100 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.232199 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b9b94fc7-9059-4c45-b19d-ccca3f345be1","Type":"ContainerStarted","Data":"32580c5dbddecb73cdd89b639f3363eb83215a1e81ad3340e896bdde85b09fa5"} Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.232286 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b9b94fc7-9059-4c45-b19d-ccca3f345be1","Type":"ContainerStarted","Data":"efeb3845a46f0b286d18894b3ac2c12100e2f05d16aaeb6a48ace27340efa782"} Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.266055 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0da2a7bf-ec42-42c5-9e6d-50d9b0f3db86" podUID="b9b94fc7-9059-4c45-b19d-ccca3f345be1" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.610052 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.658143 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.658120855 podStartE2EDuration="2.658120855s" podCreationTimestamp="2026-02-02 13:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:50:36.259954121 +0000 UTC m=+2887.172290571" watchObservedRunningTime="2026-02-02 13:50:36.658120855 +0000 UTC m=+2887.570457305" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689297 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689344 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-web-config\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689376 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config-out\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689397 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-thanos-prometheus-http-client-file\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689471 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-2\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689524 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt6kq\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-kube-api-access-bt6kq\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689602 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-tls-assets\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689638 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-0\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689746 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-1\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.689769 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\" (UID: \"ffcd7841-e67e-4156-bcd6-ae92a611a7d2\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.694709 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.702609 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.703383 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.703660 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.706156 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config-out" (OuterVolumeSpecName: "config-out") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.708077 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.708780 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-kube-api-access-bt6kq" (OuterVolumeSpecName: "kube-api-access-bt6kq") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "kube-api-access-bt6kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.709914 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.713929 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config" (OuterVolumeSpecName: "config") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.749843 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-web-config" (OuterVolumeSpecName: "web-config") pod "ffcd7841-e67e-4156-bcd6-ae92a611a7d2" (UID: "ffcd7841-e67e-4156-bcd6-ae92a611a7d2"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795220 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795262 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt6kq\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-kube-api-access-bt6kq\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795273 4955 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795283 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795293 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795313 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795322 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795331 4955 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-web-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795340 4955 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-config-out\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.795349 4955 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ffcd7841-e67e-4156-bcd6-ae92a611a7d2-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.826428 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 02 13:50:36 crc 
kubenswrapper[4955]: I0202 13:50:36.832572 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.897085 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hk9m\" (UniqueName: \"kubernetes.io/projected/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-kube-api-access-6hk9m\") pod \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.897143 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config\") pod \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.897233 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config-secret\") pod \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.897298 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-combined-ca-bundle\") pod \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\" (UID: \"82e2ca6a-27a0-4777-92e8-eb25aa3e2b73\") " Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.897856 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.908103 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-kube-api-access-6hk9m" (OuterVolumeSpecName: "kube-api-access-6hk9m") pod "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" (UID: "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73"). InnerVolumeSpecName "kube-api-access-6hk9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.928589 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" (UID: "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:50:36 crc kubenswrapper[4955]: I0202 13:50:36.996032 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" (UID: "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.000964 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.001213 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hk9m\" (UniqueName: \"kubernetes.io/projected/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-kube-api-access-6hk9m\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.001321 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.043655 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" (UID: "82e2ca6a-27a0-4777-92e8-eb25aa3e2b73"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.103056 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.246153 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ffcd7841-e67e-4156-bcd6-ae92a611a7d2","Type":"ContainerDied","Data":"b35d61665e89672b8deaf302b93f15627837424aff662ea2bcbdda0b762433a9"} Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.246205 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.246217 4955 scope.go:117] "RemoveContainer" containerID="d05e5817392c42432624a61c9499fa43ffb166d245175ace2b3e981ef2af0552" Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.248212 4955 generic.go:334] "Generic (PLEG): container finished" podID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" containerID="f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2" exitCode=137 Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.248853 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.272567 4955 scope.go:117] "RemoveContainer" containerID="e058aa55210d954ed53166cccd72526a6133915bc12c9c967d60a59515b3ec2f"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.287138 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" podUID="b9b94fc7-9059-4c45-b19d-ccca3f345be1"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.291458 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.294823 4955 scope.go:117] "RemoveContainer" containerID="bf268d338ab163ff8870a03e6b628401a112a466fecd392786a8e696e4298292"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.308491 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.318549 4955 scope.go:117] "RemoveContainer" containerID="69aa37678a192b82e95826729ec3bfe5c20e5381869149c4a83f49a83436e283"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319187 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 13:50:37 crc kubenswrapper[4955]: E0202 13:50:37.319610 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="config-reloader"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319622 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="config-reloader"
Feb 02 13:50:37 crc kubenswrapper[4955]: E0202 13:50:37.319638 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="init-config-reloader"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319645 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="init-config-reloader"
Feb 02 13:50:37 crc kubenswrapper[4955]: E0202 13:50:37.319660 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="prometheus"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319666 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="prometheus"
Feb 02 13:50:37 crc kubenswrapper[4955]: E0202 13:50:37.319673 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="thanos-sidecar"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319678 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="thanos-sidecar"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319852 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="config-reloader"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319868 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="prometheus"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.319886 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" containerName="thanos-sidecar"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.321799 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.329723 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.329936 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.330218 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.330383 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.330842 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.331029 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.331637 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.344222 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pv5nd"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.351168 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.357873 4955 scope.go:117] "RemoveContainer" containerID="f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.363357 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409506 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409636 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409668 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409706 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409727 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409746 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czjm\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-kube-api-access-2czjm\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409772 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07183068-2128-4ee0-b096-e7fb694512b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409825 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409918 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409957 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.409984 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.410010 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.417279 4955 scope.go:117] "RemoveContainer" containerID="f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2"
Feb 02 13:50:37 crc kubenswrapper[4955]: E0202 13:50:37.417639 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2\": container with ID starting with f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2 not found: ID does not exist" containerID="f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.417731 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2"} err="failed to get container status \"f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2\": rpc error: code = NotFound desc = could not find container \"f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2\": container with ID starting with f42e04e86d9f1fb67d30aa799dfd84cdfcd050682a0b69f6dabaad072ba5f3c2 not found: ID does not exist"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.511895 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.511979 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512013 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512060 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512089 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czjm\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-kube-api-access-2czjm\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512114 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512144 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07183068-2128-4ee0-b096-e7fb694512b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512178 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512212 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512317 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512366 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512402 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.512433 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.513400 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.513662 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.513789 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.513971 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.523322 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.523535 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.523800 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.524731 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.525140 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.525541 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.525572 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07183068-2128-4ee0-b096-e7fb694512b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.530347 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.534599 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czjm\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-kube-api-access-2czjm\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.552923 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"prometheus-metric-storage-0\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.658901 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.729463 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e2ca6a-27a0-4777-92e8-eb25aa3e2b73" path="/var/lib/kubelet/pods/82e2ca6a-27a0-4777-92e8-eb25aa3e2b73/volumes"
Feb 02 13:50:37 crc kubenswrapper[4955]: I0202 13:50:37.730579 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcd7841-e67e-4156-bcd6-ae92a611a7d2" path="/var/lib/kubelet/pods/ffcd7841-e67e-4156-bcd6-ae92a611a7d2/volumes"
Feb 02 13:50:38 crc kubenswrapper[4955]: I0202 13:50:38.105461 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 13:50:38 crc kubenswrapper[4955]: I0202 13:50:38.262750 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerStarted","Data":"63bc64b4637560195a5a2f2dfdf58fe186703f5e86c7a635fc466cfa72a9935c"}
Feb 02 13:50:41 crc kubenswrapper[4955]: I0202 13:50:41.293197 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerStarted","Data":"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8"}
Feb 02 13:50:47 crc kubenswrapper[4955]: I0202 13:50:47.341871 4955 generic.go:334] "Generic (PLEG): container finished" podID="07183068-2128-4ee0-b096-e7fb694512b7" containerID="842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8" exitCode=0
Feb 02 13:50:47 crc kubenswrapper[4955]: I0202 13:50:47.341946 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerDied","Data":"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8"}
Feb 02 13:50:47 crc kubenswrapper[4955]: I0202 13:50:47.716914 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:50:47 crc kubenswrapper[4955]: E0202 13:50:47.717497 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:50:48 crc kubenswrapper[4955]: I0202 13:50:48.352915 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerStarted","Data":"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84"}
Feb 02 13:50:51 crc kubenswrapper[4955]: I0202 13:50:51.379102 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerStarted","Data":"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d"}
Feb 02 13:50:51 crc kubenswrapper[4955]: I0202 13:50:51.379591 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerStarted","Data":"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210"}
Feb 02 13:50:51 crc kubenswrapper[4955]: I0202 13:50:51.421579 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.421546727 podStartE2EDuration="14.421546727s" podCreationTimestamp="2026-02-02 13:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:50:51.413634093 +0000 UTC m=+2902.325970563" watchObservedRunningTime="2026-02-02 13:50:51.421546727 +0000 UTC m=+2902.333883177"
Feb 02 13:50:52 crc kubenswrapper[4955]: I0202 13:50:52.659193 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:52 crc kubenswrapper[4955]: I0202 13:50:52.659508 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:52 crc kubenswrapper[4955]: I0202 13:50:52.666035 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:53 crc kubenswrapper[4955]: I0202 13:50:53.415196 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:50:59 crc kubenswrapper[4955]: I0202 13:50:59.724975 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:50:59 crc kubenswrapper[4955]: E0202 13:50:59.725839 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:51:13 crc kubenswrapper[4955]: I0202 13:51:13.716285 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:51:13 crc kubenswrapper[4955]: E0202 13:51:13.716978 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:51:28 crc kubenswrapper[4955]: I0202 13:51:28.716885 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:51:28 crc kubenswrapper[4955]: E0202 13:51:28.717789 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:51:42 crc kubenswrapper[4955]: I0202 13:51:42.716500 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:51:42 crc kubenswrapper[4955]: E0202 13:51:42.717355 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:51:54 crc kubenswrapper[4955]: I0202 13:51:54.715859 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:51:54 crc kubenswrapper[4955]: E0202 13:51:54.716660 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.689781 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q2548"]
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.692879 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.704789 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2548"]
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.857422 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-catalog-content\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.857706 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fq54\" (UniqueName: \"kubernetes.io/projected/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-kube-api-access-5fq54\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.858715 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-utilities\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.960940 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-utilities\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.961050 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-catalog-content\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.961134 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fq54\" (UniqueName: \"kubernetes.io/projected/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-kube-api-access-5fq54\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.961636 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-catalog-content\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.961653 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-utilities\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:05 crc kubenswrapper[4955]: I0202 13:52:05.979750 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fq54\" (UniqueName: \"kubernetes.io/projected/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-kube-api-access-5fq54\") pod \"redhat-marketplace-q2548\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") " pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:06 crc kubenswrapper[4955]: I0202 13:52:06.029781 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:06 crc kubenswrapper[4955]: I0202 13:52:06.530055 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2548"]
Feb 02 13:52:07 crc kubenswrapper[4955]: I0202 13:52:07.093290 4955 generic.go:334] "Generic (PLEG): container finished" podID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerID="36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32" exitCode=0
Feb 02 13:52:07 crc kubenswrapper[4955]: I0202 13:52:07.093397 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerDied","Data":"36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32"}
Feb 02 13:52:07 crc kubenswrapper[4955]: I0202 13:52:07.093588 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerStarted","Data":"074c20e297dca5385f19486ee3330ba788dbe5a8b0ac2c8442c6058f166701ef"}
Feb 02 13:52:08 crc kubenswrapper[4955]: I0202 13:52:08.104760 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerStarted","Data":"ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d"}
Feb 02 13:52:08 crc kubenswrapper[4955]: I0202 13:52:08.717198 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:52:08 crc kubenswrapper[4955]: E0202 13:52:08.717672 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:52:09 crc kubenswrapper[4955]: I0202 13:52:09.114660 4955 generic.go:334] "Generic (PLEG): container finished" podID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerID="ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d" exitCode=0
Feb 02 13:52:09 crc kubenswrapper[4955]: I0202 13:52:09.114711 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerDied","Data":"ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d"}
Feb 02 13:52:10 crc kubenswrapper[4955]: I0202 13:52:10.123873 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerStarted","Data":"9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee"}
Feb 02 13:52:10 crc kubenswrapper[4955]: I0202 13:52:10.153805 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q2548" podStartSLOduration=2.711530603 podStartE2EDuration="5.153783389s" podCreationTimestamp="2026-02-02 13:52:05 +0000 UTC" firstStartedPulling="2026-02-02 13:52:07.095021028 +0000 UTC m=+2978.007357478" lastFinishedPulling="2026-02-02 13:52:09.537273814 +0000 UTC m=+2980.449610264" observedRunningTime="2026-02-02 13:52:10.143320553 +0000 UTC m=+2981.055657003" watchObservedRunningTime="2026-02-02 13:52:10.153783389 +0000 UTC m=+2981.066119839"
Feb 02 13:52:16 crc kubenswrapper[4955]: I0202 13:52:16.030871 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:16 crc kubenswrapper[4955]: I0202 13:52:16.031635 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:16 crc kubenswrapper[4955]: I0202 13:52:16.080144 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:16 crc kubenswrapper[4955]: I0202 13:52:16.215027 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:16 crc kubenswrapper[4955]: I0202 13:52:16.318208 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2548"]
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.193663 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q2548" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="registry-server" containerID="cri-o://9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee" gracePeriod=2
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.659304 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.810168 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-catalog-content\") pod \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") "
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.810384 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fq54\" (UniqueName: \"kubernetes.io/projected/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-kube-api-access-5fq54\") pod \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") "
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.810424 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-utilities\") pod \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\" (UID: \"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6\") "
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.811405 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-utilities" (OuterVolumeSpecName: "utilities") pod "a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" (UID: "a3e2eeab-c0f5-490f-8fb9-2874e7f661f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.818258 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-kube-api-access-5fq54" (OuterVolumeSpecName: "kube-api-access-5fq54") pod "a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" (UID: "a3e2eeab-c0f5-490f-8fb9-2874e7f661f6"). InnerVolumeSpecName "kube-api-access-5fq54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.831287 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" (UID: "a3e2eeab-c0f5-490f-8fb9-2874e7f661f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.913454 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.913493 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fq54\" (UniqueName: \"kubernetes.io/projected/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-kube-api-access-5fq54\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:18 crc kubenswrapper[4955]: I0202 13:52:18.913509 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.206164 4955 generic.go:334] "Generic (PLEG): container finished" podID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerID="9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee" exitCode=0
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.206222 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2548"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.206224 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerDied","Data":"9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee"}
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.206350 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2548" event={"ID":"a3e2eeab-c0f5-490f-8fb9-2874e7f661f6","Type":"ContainerDied","Data":"074c20e297dca5385f19486ee3330ba788dbe5a8b0ac2c8442c6058f166701ef"}
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.206388 4955 scope.go:117] "RemoveContainer" containerID="9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.241217 4955 scope.go:117] "RemoveContainer" containerID="ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.243381 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2548"]
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.252200 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2548"]
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.269605 4955 scope.go:117] "RemoveContainer" containerID="36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.320932 4955 scope.go:117] "RemoveContainer" containerID="9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee"
Feb 02 13:52:19 crc kubenswrapper[4955]: E0202 13:52:19.321369 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee\": container with ID starting with 9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee not found: ID does not exist" containerID="9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.321414 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee"} err="failed to get container status \"9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee\": rpc error: code = NotFound desc = could not find container \"9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee\": container with ID starting with 9c8e50d05a54d3b5f42a6a97ee7920ea32259d5c829153a0905c24cd13bb54ee not found: ID does not exist"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.321444 4955 scope.go:117] "RemoveContainer" containerID="ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d"
Feb 02 13:52:19 crc kubenswrapper[4955]: E0202 13:52:19.321770 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d\": container with ID starting with ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d not found: ID does not exist" containerID="ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.321817 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d"} err="failed to get container status \"ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d\": rpc error: code = NotFound desc = could not find container \"ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d\": container with ID starting with ee8ae38e45c1e529c5ae7941d813af58959ec4f8bff7dd3bb98d1d9c3e208a1d not found: ID does not exist"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.321842 4955 scope.go:117] "RemoveContainer" containerID="36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32"
Feb 02 13:52:19 crc kubenswrapper[4955]: E0202 13:52:19.322059 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32\": container with ID starting with 36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32 not found: ID does not exist" containerID="36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.322090 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32"} err="failed to get container status \"36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32\": rpc error: code = NotFound desc = could not find container \"36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32\": container with ID starting with 36b79d01a263f5dab369b5c11d3f1ee8921b975d3c91d347ba571c06b752de32 not found: ID does not exist"
Feb 02 13:52:19 crc kubenswrapper[4955]: I0202 13:52:19.726968 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" path="/var/lib/kubelet/pods/a3e2eeab-c0f5-490f-8fb9-2874e7f661f6/volumes"
Feb 02 13:52:22 crc kubenswrapper[4955]: I0202 13:52:22.716900 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:52:22 crc kubenswrapper[4955]: E0202 13:52:22.718260 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:52:34 crc kubenswrapper[4955]: I0202 13:52:34.716698 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902"
Feb 02 13:52:34 crc kubenswrapper[4955]: E0202 13:52:34.717587 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"
Feb 02 13:52:36 crc kubenswrapper[4955]: I0202 13:52:36.070072 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log"
Feb 02 13:52:38 crc kubenswrapper[4955]: I0202 13:52:38.358467 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 13:52:38 crc kubenswrapper[4955]: I0202 13:52:38.359140 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="prometheus" containerID="cri-o://d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" gracePeriod=600
Feb 02 13:52:38 crc kubenswrapper[4955]: I0202 13:52:38.359356 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="thanos-sidecar" containerID="cri-o://d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d" gracePeriod=600
Feb 02 13:52:38 crc kubenswrapper[4955]: I0202 13:52:38.359569 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="config-reloader" containerID="cri-o://98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210" gracePeriod=600
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.348961 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.398022 4955 generic.go:334] "Generic (PLEG): container finished" podID="07183068-2128-4ee0-b096-e7fb694512b7" containerID="d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d" exitCode=0
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399012 4955 generic.go:334] "Generic (PLEG): container finished" podID="07183068-2128-4ee0-b096-e7fb694512b7" containerID="98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210" exitCode=0
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399085 4955 generic.go:334] "Generic (PLEG): container finished" podID="07183068-2128-4ee0-b096-e7fb694512b7" containerID="d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" exitCode=0
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399154 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerDied","Data":"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d"}
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399240 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerDied","Data":"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210"}
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399206 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399371 4955 scope.go:117] "RemoveContainer" containerID="d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d"
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerDied","Data":"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84"}
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.399656 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07183068-2128-4ee0-b096-e7fb694512b7","Type":"ContainerDied","Data":"63bc64b4637560195a5a2f2dfdf58fe186703f5e86c7a635fc466cfa72a9935c"}
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419541 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-config\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419633 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419669 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-tls-assets\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419698 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-2\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419720 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07183068-2128-4ee0-b096-e7fb694512b7-config-out\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419805 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-thanos-prometheus-http-client-file\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419836 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-secret-combined-ca-bundle\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419892 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419941 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czjm\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-kube-api-access-2czjm\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.419963 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.420020 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-1\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.420060 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.420085 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-0\") pod \"07183068-2128-4ee0-b096-e7fb694512b7\" (UID: \"07183068-2128-4ee0-b096-e7fb694512b7\") "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.422186 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.422346 4955 scope.go:117] "RemoveContainer" containerID="98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210"
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.423860 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.425076 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.429191 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.429524 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07183068-2128-4ee0-b096-e7fb694512b7-config-out" (OuterVolumeSpecName: "config-out") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.430587 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.432597 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.433432 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.433967 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-kube-api-access-2czjm" (OuterVolumeSpecName: "kube-api-access-2czjm") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "kube-api-access-2czjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.435957 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-config" (OuterVolumeSpecName: "config") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.438726 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.438856 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.509523 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config" (OuterVolumeSpecName: "web-config") pod "07183068-2128-4ee0-b096-e7fb694512b7" (UID: "07183068-2128-4ee0-b096-e7fb694512b7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.522893 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.522924 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.522967 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.522982 4955 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.522995 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523009 4955 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07183068-2128-4ee0-b096-e7fb694512b7-config-out\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523022 4955 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523034 4955 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523050 4955 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523064 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2czjm\" (UniqueName: \"kubernetes.io/projected/07183068-2128-4ee0-b096-e7fb694512b7-kube-api-access-2czjm\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523078 4955 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523091 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName:
\"kubernetes.io/configmap/07183068-2128-4ee0-b096-e7fb694512b7-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.523105 4955 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07183068-2128-4ee0-b096-e7fb694512b7-web-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.543398 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.580568 4955 scope.go:117] "RemoveContainer" containerID="d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.598168 4955 scope.go:117] "RemoveContainer" containerID="842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.619037 4955 scope.go:117] "RemoveContainer" containerID="d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d" Feb 02 13:52:39 crc kubenswrapper[4955]: E0202 13:52:39.619546 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": container with ID starting with d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d not found: ID does not exist" containerID="d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.619620 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d"} err="failed to get container status \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": rpc error: code = NotFound desc = could not find container \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": container with ID starting with d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.619654 4955 scope.go:117] "RemoveContainer" containerID="98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210" Feb 02 13:52:39 crc kubenswrapper[4955]: E0202 13:52:39.620003 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": container with ID starting with 98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210 not found: ID does not exist" containerID="98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.620042 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210"} err="failed to get container status \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": rpc error: code = NotFound desc = could not find container \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": container with ID starting with 98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.620069 4955 scope.go:117] 
"RemoveContainer" containerID="d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" Feb 02 13:52:39 crc kubenswrapper[4955]: E0202 13:52:39.620305 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": container with ID starting with d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84 not found: ID does not exist" containerID="d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.620329 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84"} err="failed to get container status \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": rpc error: code = NotFound desc = could not find container \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": container with ID starting with d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.620344 4955 scope.go:117] "RemoveContainer" containerID="842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8" Feb 02 13:52:39 crc kubenswrapper[4955]: E0202 13:52:39.620692 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": container with ID starting with 842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8 not found: ID does not exist" containerID="842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.620951 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8"} err="failed to get container status \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": rpc error: code = NotFound desc = could not find container \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": container with ID starting with 842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.620982 4955 scope.go:117] "RemoveContainer" containerID="d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.621303 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d"} err="failed to get container status \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": rpc error: code = NotFound desc = could not find container \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": container with ID starting with d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.621327 4955 scope.go:117] "RemoveContainer" containerID="98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.621711 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210"} err="failed to get container status \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": rpc error: code = NotFound desc = could not find container \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": container with ID starting with 98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.621730 4955 scope.go:117] "RemoveContainer" containerID="d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.621961 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84"} err="failed to get container status \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": rpc error: code = NotFound desc = could not find container \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": container with ID starting with d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.621985 4955 scope.go:117] "RemoveContainer" containerID="842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.622183 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8"} err="failed to get container status \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": rpc error: code = NotFound desc = could not find container \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": container with ID starting with 842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.622206 4955 scope.go:117] "RemoveContainer" containerID="d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.622627 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d"} err="failed to get container status \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": rpc error: code = NotFound desc = could not find container \"d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d\": container with ID starting with d924ff433fb9b379df99c12e431518d8eb23ceb5b5d76133c5fc78aa056d8f1d not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.622653 4955 scope.go:117] "RemoveContainer" containerID="98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.622954 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210"} err="failed to get container status \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": rpc error: code = NotFound desc = could not find container \"98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210\": container with ID starting with 98cd142b1746633226bd64a8c7ffba8089e83290a862e9463e782ba3cc917210 not found: ID does not exist" Feb 
02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.623006 4955 scope.go:117] "RemoveContainer" containerID="d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.623339 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84"} err="failed to get container status \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": rpc error: code = NotFound desc = could not find container \"d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84\": container with ID starting with d5b3a3a840e4e0d8b7dfe946eac3e428a182daf7ff26320a1af6376544227d84 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.623372 4955 scope.go:117] "RemoveContainer" containerID="842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.623693 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8"} err="failed to get container status \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": rpc error: code = NotFound desc = could not find container \"842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8\": container with ID starting with 842224d29c8f5428e0702c6e28f1425b813c163c3484ff9106ed797bd9a7cde8 not found: ID does not exist" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.624688 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.742023 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:52:39 crc kubenswrapper[4955]: I0202 13:52:39.754093 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.554840 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555417 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="init-config-reloader" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555429 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="init-config-reloader" Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555441 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="config-reloader" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555448 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="config-reloader" Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555460 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="prometheus" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555466 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="prometheus" Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555474 4955 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="thanos-sidecar" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555479 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="thanos-sidecar" Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555499 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="extract-content" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555504 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="extract-content" Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555516 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="extract-utilities" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555521 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="extract-utilities" Feb 02 13:52:40 crc kubenswrapper[4955]: E0202 13:52:40.555544 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="registry-server" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555549 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="registry-server" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555728 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="prometheus" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555753 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="config-reloader" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555759 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="07183068-2128-4ee0-b096-e7fb694512b7" containerName="thanos-sidecar" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.555767 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e2eeab-c0f5-490f-8fb9-2874e7f661f6" containerName="registry-server" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.557655 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.559891 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.561669 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.561956 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.562204 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.562319 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.562413 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pv5nd" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.562592 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.562763 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.568431 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.600321 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.652456 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.652508 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.652571 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.652618 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" 
Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.652643 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.652743 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.653465 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.654126 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.654204 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.654570 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.654885 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.655061 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " 
pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.655113 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqpnz\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-kube-api-access-qqpnz\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757687 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757768 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757798 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757843 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqpnz\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-kube-api-access-qqpnz\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757900 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757933 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.757981 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.758042 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.758078 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.758155 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.758194 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.758250 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.758279 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.759180 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.759779 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.760323 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: 
\"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.761210 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.765386 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.767403 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.769046 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.771258 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.772532 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.774243 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.775506 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.775804 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqpnz\" (UniqueName: 
\"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-kube-api-access-qqpnz\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.780169 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:40 crc kubenswrapper[4955]: I0202 13:52:40.895951 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:41 crc kubenswrapper[4955]: I0202 13:52:41.513939 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:52:41 crc kubenswrapper[4955]: I0202 13:52:41.738650 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07183068-2128-4ee0-b096-e7fb694512b7" path="/var/lib/kubelet/pods/07183068-2128-4ee0-b096-e7fb694512b7/volumes" Feb 02 13:52:42 crc kubenswrapper[4955]: I0202 13:52:42.438755 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerStarted","Data":"371c5b469f9e1e44338ef48a169f8f20c21847bc92baa5f5480d71bef5fcdacd"} Feb 02 13:52:45 crc kubenswrapper[4955]: I0202 13:52:45.472532 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerStarted","Data":"dc1fcb7a31e0a9036bf3648866ff9ba4e1c4032666b36eb4219ea84b30e94d91"} Feb 02 13:52:49 crc kubenswrapper[4955]: I0202 13:52:49.727391 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:52:49 crc kubenswrapper[4955]: E0202 13:52:49.728304 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:52:51 crc kubenswrapper[4955]: I0202 13:52:51.538149 4955 generic.go:334] "Generic (PLEG): container finished" podID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerID="dc1fcb7a31e0a9036bf3648866ff9ba4e1c4032666b36eb4219ea84b30e94d91" exitCode=0 Feb 02 13:52:51 crc kubenswrapper[4955]: I0202 13:52:51.538258 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerDied","Data":"dc1fcb7a31e0a9036bf3648866ff9ba4e1c4032666b36eb4219ea84b30e94d91"} Feb 02 13:52:52 crc kubenswrapper[4955]: I0202 13:52:52.550332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerStarted","Data":"6584d9849250eb322e4bab36475d055d3e9a8256a7b13e26745cb7fc8fbc29ff"} Feb 02 13:52:55 crc kubenswrapper[4955]: I0202 13:52:55.578350 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerStarted","Data":"eb4d85218d6a88798d69ed53171a2c219741b239f9a3c5d14a857759663c0664"} Feb 02 13:52:55 crc kubenswrapper[4955]: I0202 13:52:55.578962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerStarted","Data":"bd0c84d96a0adc55237201ea0fa51744b80262c5c4af413dafa78ba5e505999c"} Feb 02 13:52:55 crc kubenswrapper[4955]: I0202 13:52:55.614374 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.614356254 podStartE2EDuration="15.614356254s" podCreationTimestamp="2026-02-02 13:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:52:55.608173652 +0000 UTC m=+3026.520510122" watchObservedRunningTime="2026-02-02 13:52:55.614356254 +0000 UTC m=+3026.526692704" Feb 02 13:52:55 crc kubenswrapper[4955]: I0202 13:52:55.897577 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:55 crc kubenswrapper[4955]: I0202 13:52:55.897679 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:55 crc kubenswrapper[4955]: I0202 13:52:55.903220 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 02 13:52:56 crc kubenswrapper[4955]: I0202 13:52:56.593428 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 13:53:04 crc kubenswrapper[4955]: I0202 13:53:04.717544 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:53:04 crc kubenswrapper[4955]: E0202 13:53:04.718407 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:53:17 crc kubenswrapper[4955]: I0202 13:53:17.716534 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:53:17 crc kubenswrapper[4955]: E0202 13:53:17.717273 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:53:32 crc kubenswrapper[4955]: I0202 13:53:32.716976 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:53:32 crc kubenswrapper[4955]: E0202 13:53:32.718303 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:53:43 crc kubenswrapper[4955]: I0202 13:53:43.716528 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:53:43 crc kubenswrapper[4955]: E0202 13:53:43.717235 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:53:55 crc kubenswrapper[4955]: I0202 13:53:55.716983 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:53:55 crc kubenswrapper[4955]: E0202 13:53:55.717812 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:54:06 crc kubenswrapper[4955]: I0202 13:54:06.715927 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:54:06 crc kubenswrapper[4955]: E0202 13:54:06.716666 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:54:17 crc kubenswrapper[4955]: I0202 13:54:17.717447 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:54:17 crc kubenswrapper[4955]: E0202 13:54:17.718358 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:54:31 crc kubenswrapper[4955]: I0202 13:54:31.717092 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:54:31 crc kubenswrapper[4955]: E0202 13:54:31.719170 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" 
podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:54:43 crc kubenswrapper[4955]: I0202 13:54:43.716860 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:54:43 crc kubenswrapper[4955]: E0202 13:54:43.717639 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:54:58 crc kubenswrapper[4955]: I0202 13:54:58.717914 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:54:58 crc kubenswrapper[4955]: E0202 13:54:58.719176 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:55:06 crc kubenswrapper[4955]: I0202 13:55:06.042403 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-a2bb-account-create-update-b25nd"] Feb 02 13:55:06 crc kubenswrapper[4955]: I0202 13:55:06.051844 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-chb64"] Feb 02 13:55:06 crc kubenswrapper[4955]: I0202 13:55:06.063497 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-a2bb-account-create-update-b25nd"] Feb 02 13:55:06 crc kubenswrapper[4955]: I0202 13:55:06.073089 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-chb64"] Feb 02 13:55:07 crc kubenswrapper[4955]: I0202 13:55:07.729163 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a84457b-b28a-497d-904f-6ba8d1dbd8b1" path="/var/lib/kubelet/pods/4a84457b-b28a-497d-904f-6ba8d1dbd8b1/volumes" Feb 02 13:55:07 crc kubenswrapper[4955]: I0202 13:55:07.730173 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f721ccf9-9b3f-412e-86ec-13501bf80899" path="/var/lib/kubelet/pods/f721ccf9-9b3f-412e-86ec-13501bf80899/volumes" Feb 02 13:55:11 crc kubenswrapper[4955]: I0202 13:55:11.716575 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:55:11 crc kubenswrapper[4955]: E0202 13:55:11.717082 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:55:24 crc kubenswrapper[4955]: I0202 13:55:24.716904 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:55:24 crc kubenswrapper[4955]: E0202 13:55:24.717825 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 13:55:35 crc kubenswrapper[4955]: I0202 13:55:35.716501 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:55:36 crc kubenswrapper[4955]: I0202 13:55:36.022940 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"a885561f2dbf1d8ee032bf28e04cb51494deebc8003c906669492185d54028be"} Feb 02 13:55:41 crc kubenswrapper[4955]: I0202 13:55:41.263922 4955 scope.go:117] "RemoveContainer" containerID="05b536f76890245fb716e87d9e5ad75f20aa87dbd9941c63c5f4216a95d40233" Feb 02 13:55:41 crc kubenswrapper[4955]: I0202 13:55:41.290188 4955 scope.go:117] "RemoveContainer" containerID="06ce090717891cfe6f34fa9d77b2d5f3c4ddd84022c70448a7ffcae4ff08f496" Feb 02 13:56:37 crc kubenswrapper[4955]: I0202 13:56:37.759137 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log" Feb 02 13:57:02 crc kubenswrapper[4955]: I0202 13:57:02.860859 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjvrf"] Feb 02 13:57:02 crc kubenswrapper[4955]: I0202 13:57:02.863772 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:02 crc kubenswrapper[4955]: I0202 13:57:02.872619 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjvrf"] Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.015261 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-utilities\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.015312 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xcm\" (UniqueName: \"kubernetes.io/projected/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-kube-api-access-j8xcm\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.015377 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-catalog-content\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.117465 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-utilities\") pod \"certified-operators-kjvrf\" (UID: 
\"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.117516 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xcm\" (UniqueName: \"kubernetes.io/projected/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-kube-api-access-j8xcm\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.117587 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-catalog-content\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.118129 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-utilities\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.118139 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-catalog-content\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.144968 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xcm\" (UniqueName: \"kubernetes.io/projected/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-kube-api-access-j8xcm\") pod \"certified-operators-kjvrf\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.192431 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.702393 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjvrf"] Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.823687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerStarted","Data":"493706792ad55b634a483e2251b01eb3c83b32fd169539056162644f7221b4ac"} Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.860304 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgjnb"] Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.864323 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.872132 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgjnb"] Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.940742 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-catalog-content\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.940806 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-utilities\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:03 crc kubenswrapper[4955]: I0202 13:57:03.940964 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tst9s\" (UniqueName: \"kubernetes.io/projected/d9553e41-1810-4447-8dbe-559e6fe29a74-kube-api-access-tst9s\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.043306 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tst9s\" (UniqueName: \"kubernetes.io/projected/d9553e41-1810-4447-8dbe-559e6fe29a74-kube-api-access-tst9s\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.043679 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-catalog-content\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.043855 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-utilities\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.044263 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-catalog-content\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.044471 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-utilities\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.064466 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tst9s\" (UniqueName: \"kubernetes.io/projected/d9553e41-1810-4447-8dbe-559e6fe29a74-kube-api-access-tst9s\") pod \"community-operators-qgjnb\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.185531 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.515145 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgjnb"] Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.832860 4955 generic.go:334] "Generic (PLEG): container finished" podID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerID="2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc" exitCode=0 Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.832919 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerDied","Data":"2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc"} Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.832963 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerStarted","Data":"23631c221bb6ff881e9f9357312593250f3bea6f3add3606d05bf7b47c18d18d"} Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.834506 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.837160 4955 generic.go:334] "Generic (PLEG): container finished" podID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerID="4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8" exitCode=0 Feb 02 13:57:04 crc kubenswrapper[4955]: I0202 13:57:04.837182 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerDied","Data":"4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8"} Feb 02 13:57:06 crc kubenswrapper[4955]: I0202 13:57:06.856756 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerStarted","Data":"db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0"} Feb 02 13:57:06 crc kubenswrapper[4955]: I0202 13:57:06.858959 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerStarted","Data":"1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18"} Feb 02 13:57:08 crc kubenswrapper[4955]: I0202 13:57:08.882445 4955 generic.go:334] "Generic (PLEG): container finished" podID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerID="1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18" exitCode=0 Feb 02 13:57:08 crc kubenswrapper[4955]: I0202 13:57:08.882550 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerDied","Data":"1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18"} Feb 02 13:57:08 
crc kubenswrapper[4955]: I0202 13:57:08.886137 4955 generic.go:334] "Generic (PLEG): container finished" podID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerID="db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0" exitCode=0 Feb 02 13:57:08 crc kubenswrapper[4955]: I0202 13:57:08.886199 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerDied","Data":"db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0"} Feb 02 13:57:09 crc kubenswrapper[4955]: I0202 13:57:09.897712 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerStarted","Data":"e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa"} Feb 02 13:57:09 crc kubenswrapper[4955]: I0202 13:57:09.899957 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerStarted","Data":"d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa"} Feb 02 13:57:09 crc kubenswrapper[4955]: I0202 13:57:09.926421 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjvrf" podStartSLOduration=3.465004274 podStartE2EDuration="7.926396108s" podCreationTimestamp="2026-02-02 13:57:02 +0000 UTC" firstStartedPulling="2026-02-02 13:57:04.838254608 +0000 UTC m=+3275.750591058" lastFinishedPulling="2026-02-02 13:57:09.299646442 +0000 UTC m=+3280.211982892" observedRunningTime="2026-02-02 13:57:09.920988305 +0000 UTC m=+3280.833324755" watchObservedRunningTime="2026-02-02 13:57:09.926396108 +0000 UTC m=+3280.838732558" Feb 02 13:57:09 crc kubenswrapper[4955]: I0202 13:57:09.943198 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgjnb" podStartSLOduration=2.170685529 podStartE2EDuration="6.94317675s" podCreationTimestamp="2026-02-02 13:57:03 +0000 UTC" firstStartedPulling="2026-02-02 13:57:04.834325541 +0000 UTC m=+3275.746661981" lastFinishedPulling="2026-02-02 13:57:09.606816752 +0000 UTC m=+3280.519153202" observedRunningTime="2026-02-02 13:57:09.939012888 +0000 UTC m=+3280.851349368" watchObservedRunningTime="2026-02-02 13:57:09.94317675 +0000 UTC m=+3280.855513200" Feb 02 13:57:13 crc kubenswrapper[4955]: I0202 13:57:13.193373 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:13 crc kubenswrapper[4955]: I0202 13:57:13.193929 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:13 crc kubenswrapper[4955]: I0202 13:57:13.242397 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:14 crc kubenswrapper[4955]: I0202 13:57:14.185905 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:14 crc kubenswrapper[4955]: I0202 13:57:14.185947 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:14 crc kubenswrapper[4955]: I0202 13:57:14.229860 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:14 crc kubenswrapper[4955]: I0202 13:57:14.990855 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:15 crc kubenswrapper[4955]: I0202 13:57:15.450817 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgjnb"] Feb 02 13:57:16 crc kubenswrapper[4955]: I0202 13:57:16.965301 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgjnb" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="registry-server" containerID="cri-o://d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa" gracePeriod=2 Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.418719 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.514004 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-utilities\") pod \"d9553e41-1810-4447-8dbe-559e6fe29a74\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.514162 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tst9s\" (UniqueName: \"kubernetes.io/projected/d9553e41-1810-4447-8dbe-559e6fe29a74-kube-api-access-tst9s\") pod \"d9553e41-1810-4447-8dbe-559e6fe29a74\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.514230 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-catalog-content\") pod \"d9553e41-1810-4447-8dbe-559e6fe29a74\" (UID: \"d9553e41-1810-4447-8dbe-559e6fe29a74\") " Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.515206 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-utilities" (OuterVolumeSpecName: "utilities") pod "d9553e41-1810-4447-8dbe-559e6fe29a74" (UID: "d9553e41-1810-4447-8dbe-559e6fe29a74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.520695 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9553e41-1810-4447-8dbe-559e6fe29a74-kube-api-access-tst9s" (OuterVolumeSpecName: "kube-api-access-tst9s") pod "d9553e41-1810-4447-8dbe-559e6fe29a74" (UID: "d9553e41-1810-4447-8dbe-559e6fe29a74"). InnerVolumeSpecName "kube-api-access-tst9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.569688 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9553e41-1810-4447-8dbe-559e6fe29a74" (UID: "d9553e41-1810-4447-8dbe-559e6fe29a74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.616635 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tst9s\" (UniqueName: \"kubernetes.io/projected/d9553e41-1810-4447-8dbe-559e6fe29a74-kube-api-access-tst9s\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.616681 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.616692 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9553e41-1810-4447-8dbe-559e6fe29a74-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.977263 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgjnb" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.977291 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerDied","Data":"d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa"} Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.977342 4955 scope.go:117] "RemoveContainer" containerID="d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa" Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.977186 4955 generic.go:334] "Generic (PLEG): container finished" podID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerID="d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa" exitCode=0 Feb 02 13:57:17 crc kubenswrapper[4955]: I0202 13:57:17.977687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjnb" event={"ID":"d9553e41-1810-4447-8dbe-559e6fe29a74","Type":"ContainerDied","Data":"23631c221bb6ff881e9f9357312593250f3bea6f3add3606d05bf7b47c18d18d"} Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.002203 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgjnb"] Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.008935 4955 scope.go:117] "RemoveContainer" containerID="db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.010564 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgjnb"] Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.030831 4955 scope.go:117] "RemoveContainer" containerID="2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.078805 4955 scope.go:117] "RemoveContainer" containerID="d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa" Feb 02 13:57:18 crc kubenswrapper[4955]: E0202 13:57:18.079172 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa\": container with ID starting with d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa not found: ID does not exist" containerID="d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.079204 
4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa"} err="failed to get container status \"d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa\": rpc error: code = NotFound desc = could not find container \"d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa\": container with ID starting with d840d7dd4f205c9ca330043192ff08648dc09d96b6e30fc84505e7e014c24efa not found: ID does not exist" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.079225 4955 scope.go:117] "RemoveContainer" containerID="db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0" Feb 02 13:57:18 crc kubenswrapper[4955]: E0202 13:57:18.079455 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0\": container with ID starting with db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0 not found: ID does not exist" containerID="db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.079487 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0"} err="failed to get container status \"db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0\": rpc error: code = NotFound desc = could not find container \"db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0\": container with ID starting with db5da6b822c087fd45fde67c17124ab53c61e84211fdf064764f1b242f3fe7b0 not found: ID does not exist" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.079510 4955 scope.go:117] "RemoveContainer" containerID="2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc" Feb 02 13:57:18 crc kubenswrapper[4955]: E0202 13:57:18.079828 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc\": container with ID starting with 2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc not found: ID does not exist" containerID="2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc" Feb 02 13:57:18 crc kubenswrapper[4955]: I0202 13:57:18.079849 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc"} err="failed to get container status \"2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc\": rpc error: code = NotFound desc = could not find container \"2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc\": container with ID starting with 2dfc0c6b14d0323336544319e54938d57524e0494d802ad0bdd18500125395bc not found: ID does not exist" Feb 02 13:57:19 crc kubenswrapper[4955]: I0202 13:57:19.727918 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" path="/var/lib/kubelet/pods/d9553e41-1810-4447-8dbe-559e6fe29a74/volumes" Feb 02 13:57:23 crc kubenswrapper[4955]: I0202 13:57:23.242215 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:23 crc kubenswrapper[4955]: I0202 13:57:23.290457 4955 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-kjvrf"] Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.032554 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjvrf" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="registry-server" containerID="cri-o://e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa" gracePeriod=2 Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.491295 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.549681 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-catalog-content\") pod \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.550032 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-utilities\") pod \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.551076 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-utilities" (OuterVolumeSpecName: "utilities") pod "131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" (UID: "131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.551360 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xcm\" (UniqueName: \"kubernetes.io/projected/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-kube-api-access-j8xcm\") pod \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\" (UID: \"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3\") " Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.553849 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.561963 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-kube-api-access-j8xcm" (OuterVolumeSpecName: "kube-api-access-j8xcm") pod "131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" (UID: "131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3"). InnerVolumeSpecName "kube-api-access-j8xcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.611160 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" (UID: "131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.657788 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:24 crc kubenswrapper[4955]: I0202 13:57:24.657855 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xcm\" (UniqueName: \"kubernetes.io/projected/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3-kube-api-access-j8xcm\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.043697 4955 generic.go:334] "Generic (PLEG): container finished" podID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerID="e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa" exitCode=0 Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.043743 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerDied","Data":"e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa"} Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.043775 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjvrf" event={"ID":"131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3","Type":"ContainerDied","Data":"493706792ad55b634a483e2251b01eb3c83b32fd169539056162644f7221b4ac"} Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.043796 4955 scope.go:117] "RemoveContainer" containerID="e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.043810 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjvrf" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.092296 4955 scope.go:117] "RemoveContainer" containerID="1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.094985 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjvrf"] Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.106263 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjvrf"] Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.116648 4955 scope.go:117] "RemoveContainer" containerID="4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.155955 4955 scope.go:117] "RemoveContainer" containerID="e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa" Feb 02 13:57:25 crc kubenswrapper[4955]: E0202 13:57:25.156399 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa\": container with ID starting with e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa not found: ID does not exist" containerID="e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.156462 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa"} err="failed to get container status \"e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa\": rpc error: code = NotFound desc = could not find container \"e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa\": container with ID starting with e7e3f81362bb81c95d1fbd08675f3e16a170608f671c08786bf07048e107faaa not found: ID does not exist" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.156495 4955 scope.go:117] "RemoveContainer" containerID="1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18" Feb 02 13:57:25 crc kubenswrapper[4955]: E0202 13:57:25.157039 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18\": container with ID starting with 1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18 not found: ID does not exist" containerID="1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.157078 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18"} err="failed to get container status \"1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18\": rpc error: code = NotFound desc = could not find container \"1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18\": container with ID starting with 1a38ebdf503f20a6ac081d080fb796c5c0a005d9fd7216ed6c0c775235e83a18 not found: ID does not exist" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.157111 4955 scope.go:117] "RemoveContainer" containerID="4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8" Feb 02 13:57:25 crc kubenswrapper[4955]: E0202 13:57:25.157603 4955 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8\": container with ID starting with 4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8 not found: ID does not exist" containerID="4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.157638 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8"} err="failed to get container status \"4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8\": rpc error: code = NotFound desc = could not find container \"4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8\": container with ID starting with 4a68cf3883ae2857cbe2c7a2241ae1ee1fe39e57daff1a9546382b25946573a8 not found: ID does not exist" Feb 02 13:57:25 crc kubenswrapper[4955]: I0202 13:57:25.727772 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" path="/var/lib/kubelet/pods/131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3/volumes" Feb 02 13:58:03 crc kubenswrapper[4955]: I0202 13:58:03.016792 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:58:03 crc kubenswrapper[4955]: I0202 13:58:03.017276 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:58:33 crc kubenswrapper[4955]: I0202 13:58:33.017084 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:58:33 crc kubenswrapper[4955]: I0202 13:58:33.017649 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.017102 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.018089 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.018178 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.019670 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a885561f2dbf1d8ee032bf28e04cb51494deebc8003c906669492185d54028be"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.019743 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://a885561f2dbf1d8ee032bf28e04cb51494deebc8003c906669492185d54028be" gracePeriod=600 Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.932299 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="a885561f2dbf1d8ee032bf28e04cb51494deebc8003c906669492185d54028be" exitCode=0 Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.932384 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"a885561f2dbf1d8ee032bf28e04cb51494deebc8003c906669492185d54028be"} Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.932765 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3"} Feb 02 13:59:03 crc kubenswrapper[4955]: I0202 13:59:03.932807 4955 scope.go:117] "RemoveContainer" containerID="0ee414e34fa55672a8ea471c4294ee0b0f888fdf91ea9f00fc453e256604d902" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.670490 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lj6jw"] Feb 02 13:59:51 crc kubenswrapper[4955]: E0202 13:59:51.671799 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="registry-server" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.671818 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="registry-server" Feb 02 13:59:51 crc kubenswrapper[4955]: E0202 13:59:51.671835 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="extract-utilities" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.671844 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="extract-utilities" Feb 02 13:59:51 crc kubenswrapper[4955]: E0202 13:59:51.671883 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="extract-utilities" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.671894 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="extract-utilities" Feb 02 13:59:51 crc kubenswrapper[4955]: E0202 13:59:51.671918 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="extract-content" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.671926 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="extract-content" Feb 02 13:59:51 crc kubenswrapper[4955]: E0202 13:59:51.671945 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="extract-content" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.671953 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="extract-content" Feb 02 13:59:51 crc kubenswrapper[4955]: E0202 13:59:51.671974 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="registry-server" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.671981 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="registry-server" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.672226 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9553e41-1810-4447-8dbe-559e6fe29a74" containerName="registry-server" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.672247 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="131b9d1d-0ac8-4741-a0ff-4dc37a1a31b3" containerName="registry-server" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.674013 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.689253 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lj6jw"] Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.852076 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726md\" (UniqueName: \"kubernetes.io/projected/7b765d4b-5556-4eca-9ada-c9b85d669d94-kube-api-access-726md\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.852182 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-utilities\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.852216 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-catalog-content\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.953882 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726md\" (UniqueName: \"kubernetes.io/projected/7b765d4b-5556-4eca-9ada-c9b85d669d94-kube-api-access-726md\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.953973 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-utilities\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.954000 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-catalog-content\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.954530 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-utilities\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.954622 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-catalog-content\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:51 crc kubenswrapper[4955]: I0202 13:59:51.976929 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726md\" (UniqueName: \"kubernetes.io/projected/7b765d4b-5556-4eca-9ada-c9b85d669d94-kube-api-access-726md\") pod \"redhat-operators-lj6jw\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:52 crc kubenswrapper[4955]: I0202 13:59:52.005818 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 13:59:52 crc kubenswrapper[4955]: I0202 13:59:52.517193 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lj6jw"] Feb 02 13:59:52 crc kubenswrapper[4955]: W0202 13:59:52.520175 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b765d4b_5556_4eca_9ada_c9b85d669d94.slice/crio-8733b311ae8c4225cf69969384f43d575a2217fe6ed6485a141005b415934776 WatchSource:0}: Error finding container 8733b311ae8c4225cf69969384f43d575a2217fe6ed6485a141005b415934776: Status 404 returned error can't find the container with id 8733b311ae8c4225cf69969384f43d575a2217fe6ed6485a141005b415934776 Feb 02 13:59:53 crc kubenswrapper[4955]: I0202 13:59:53.365957 4955 generic.go:334] "Generic (PLEG): container finished" podID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerID="06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa" exitCode=0 Feb 02 13:59:53 crc kubenswrapper[4955]: I0202 13:59:53.366049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerDied","Data":"06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa"} Feb 02 13:59:53 crc kubenswrapper[4955]: I0202 13:59:53.366497 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerStarted","Data":"8733b311ae8c4225cf69969384f43d575a2217fe6ed6485a141005b415934776"} Feb 02 13:59:55 crc kubenswrapper[4955]: I0202 13:59:55.392882 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerStarted","Data":"6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a"} Feb 02 13:59:59 crc kubenswrapper[4955]: I0202 13:59:59.428695 4955 generic.go:334] "Generic (PLEG): container finished" podID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerID="6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a" exitCode=0 Feb 02 13:59:59 crc kubenswrapper[4955]: I0202 13:59:59.428777 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerDied","Data":"6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a"} Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.188148 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw"] Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.189820 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.196077 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.196958 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.205446 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw"] Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.322369 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c731bf44-2d70-4227-b7f1-87fc32a0b596-secret-volume\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.322704 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c731bf44-2d70-4227-b7f1-87fc32a0b596-config-volume\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.322778 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpw4\" (UniqueName: \"kubernetes.io/projected/c731bf44-2d70-4227-b7f1-87fc32a0b596-kube-api-access-5jpw4\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.424799 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c731bf44-2d70-4227-b7f1-87fc32a0b596-secret-volume\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.424897 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c731bf44-2d70-4227-b7f1-87fc32a0b596-config-volume\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.424974 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jpw4\" (UniqueName: \"kubernetes.io/projected/c731bf44-2d70-4227-b7f1-87fc32a0b596-kube-api-access-5jpw4\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.426176 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c731bf44-2d70-4227-b7f1-87fc32a0b596-config-volume\") pod 
\"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.432500 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c731bf44-2d70-4227-b7f1-87fc32a0b596-secret-volume\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.453238 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jpw4\" (UniqueName: \"kubernetes.io/projected/c731bf44-2d70-4227-b7f1-87fc32a0b596-kube-api-access-5jpw4\") pod \"collect-profiles-29500680-w2wvw\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.514686 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:00 crc kubenswrapper[4955]: W0202 14:00:00.997823 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc731bf44_2d70_4227_b7f1_87fc32a0b596.slice/crio-2420531e277143515b0e00bb7ff86d05fae7cae05f064646e13fb14bd9afa029 WatchSource:0}: Error finding container 2420531e277143515b0e00bb7ff86d05fae7cae05f064646e13fb14bd9afa029: Status 404 returned error can't find the container with id 2420531e277143515b0e00bb7ff86d05fae7cae05f064646e13fb14bd9afa029 Feb 02 14:00:00 crc kubenswrapper[4955]: I0202 14:00:00.997934 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw"] Feb 02 14:00:01 crc kubenswrapper[4955]: I0202 14:00:01.451484 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" event={"ID":"c731bf44-2d70-4227-b7f1-87fc32a0b596","Type":"ContainerStarted","Data":"ad4b33519cb3f23e56f8f1d98a20f6eee1467d8c74eb1029a34f4a573477d352"} Feb 02 14:00:01 crc kubenswrapper[4955]: I0202 14:00:01.451808 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" event={"ID":"c731bf44-2d70-4227-b7f1-87fc32a0b596","Type":"ContainerStarted","Data":"2420531e277143515b0e00bb7ff86d05fae7cae05f064646e13fb14bd9afa029"} Feb 02 14:00:01 crc kubenswrapper[4955]: I0202 14:00:01.455071 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerStarted","Data":"793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1"} Feb 02 14:00:01 crc kubenswrapper[4955]: I0202 14:00:01.467541 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" podStartSLOduration=1.467524641 podStartE2EDuration="1.467524641s" podCreationTimestamp="2026-02-02 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:00:01.466796583 +0000 UTC m=+3452.379133023" watchObservedRunningTime="2026-02-02 14:00:01.467524641 +0000 UTC 
m=+3452.379861091" Feb 02 14:00:01 crc kubenswrapper[4955]: I0202 14:00:01.500195 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lj6jw" podStartSLOduration=3.411026923 podStartE2EDuration="10.500171408s" podCreationTimestamp="2026-02-02 13:59:51 +0000 UTC" firstStartedPulling="2026-02-02 13:59:53.368545991 +0000 UTC m=+3444.280882441" lastFinishedPulling="2026-02-02 14:00:00.457690466 +0000 UTC m=+3451.370026926" observedRunningTime="2026-02-02 14:00:01.487304434 +0000 UTC m=+3452.399640904" watchObservedRunningTime="2026-02-02 14:00:01.500171408 +0000 UTC m=+3452.412507858" Feb 02 14:00:02 crc kubenswrapper[4955]: I0202 14:00:02.006517 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 14:00:02 crc kubenswrapper[4955]: I0202 14:00:02.007250 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 14:00:02 crc kubenswrapper[4955]: I0202 14:00:02.468103 4955 generic.go:334] "Generic (PLEG): container finished" podID="c731bf44-2d70-4227-b7f1-87fc32a0b596" containerID="ad4b33519cb3f23e56f8f1d98a20f6eee1467d8c74eb1029a34f4a573477d352" exitCode=0 Feb 02 14:00:02 crc kubenswrapper[4955]: I0202 14:00:02.468194 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" event={"ID":"c731bf44-2d70-4227-b7f1-87fc32a0b596","Type":"ContainerDied","Data":"ad4b33519cb3f23e56f8f1d98a20f6eee1467d8c74eb1029a34f4a573477d352"} Feb 02 14:00:03 crc kubenswrapper[4955]: I0202 14:00:03.052702 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lj6jw" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="registry-server" probeResult="failure" output=< Feb 02 14:00:03 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 14:00:03 crc kubenswrapper[4955]: > Feb 02 14:00:03 crc kubenswrapper[4955]: I0202 14:00:03.825401 4955 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.016869 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c731bf44-2d70-4227-b7f1-87fc32a0b596-secret-volume\") pod \"c731bf44-2d70-4227-b7f1-87fc32a0b596\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") "
Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.017019 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jpw4\" (UniqueName: \"kubernetes.io/projected/c731bf44-2d70-4227-b7f1-87fc32a0b596-kube-api-access-5jpw4\") pod \"c731bf44-2d70-4227-b7f1-87fc32a0b596\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") "
Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.017171 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c731bf44-2d70-4227-b7f1-87fc32a0b596-config-volume\") pod \"c731bf44-2d70-4227-b7f1-87fc32a0b596\" (UID: \"c731bf44-2d70-4227-b7f1-87fc32a0b596\") "
Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.018199 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c731bf44-2d70-4227-b7f1-87fc32a0b596-config-volume" (OuterVolumeSpecName: "config-volume") pod "c731bf44-2d70-4227-b7f1-87fc32a0b596" (UID: "c731bf44-2d70-4227-b7f1-87fc32a0b596"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.022069 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c731bf44-2d70-4227-b7f1-87fc32a0b596-kube-api-access-5jpw4" (OuterVolumeSpecName: "kube-api-access-5jpw4") pod "c731bf44-2d70-4227-b7f1-87fc32a0b596" (UID: "c731bf44-2d70-4227-b7f1-87fc32a0b596"). InnerVolumeSpecName "kube-api-access-5jpw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.025546 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c731bf44-2d70-4227-b7f1-87fc32a0b596-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c731bf44-2d70-4227-b7f1-87fc32a0b596" (UID: "c731bf44-2d70-4227-b7f1-87fc32a0b596"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
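Each of the pod's volumes walks the same three-step teardown: "operationExecutor.UnmountVolume started" in the reconciler, then "UnmountVolume.TearDown succeeded" in the operation generator, then "Volume detached". A sketch that tracks the last phase seen per volume from a kubelet journal fed on stdin (e.g. journalctl -u kubelet | python3 track_volumes.py; the script name and regexes are illustrative):

    import re
    import sys

    PHASES = [
        (re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)'), "started"),
        (re.compile(r'UnmountVolume\.TearDown succeeded .*OuterVolumeSpecName: "([^"]+)"'), "torn down"),
        (re.compile(r'Volume detached for volume \\?"([^"\\]+)'), "detached"),
    ]

    state = {}
    for line in sys.stdin:
        for pattern, phase in PHASES:
            m = pattern.search(line)
            if m:
                state[m.group(1)] = phase

    for volume, phase in sorted(state.items()):
        # anything that never reaches "detached" is still pending teardown
        print(f"{volume}: {phase}")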
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.120566 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jpw4\" (UniqueName: \"kubernetes.io/projected/c731bf44-2d70-4227-b7f1-87fc32a0b596-kube-api-access-5jpw4\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.120673 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c731bf44-2d70-4227-b7f1-87fc32a0b596-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.120689 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c731bf44-2d70-4227-b7f1-87fc32a0b596-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.492962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" event={"ID":"c731bf44-2d70-4227-b7f1-87fc32a0b596","Type":"ContainerDied","Data":"2420531e277143515b0e00bb7ff86d05fae7cae05f064646e13fb14bd9afa029"} Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.493011 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2420531e277143515b0e00bb7ff86d05fae7cae05f064646e13fb14bd9afa029" Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.493016 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-w2wvw" Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.545416 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4"] Feb 02 14:00:04 crc kubenswrapper[4955]: I0202 14:00:04.555163 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-h58p4"] Feb 02 14:00:05 crc kubenswrapper[4955]: I0202 14:00:05.730969 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4033bc-b573-436c-97c6-7c3ea8fc8c02" path="/var/lib/kubelet/pods/7e4033bc-b573-436c-97c6-7c3ea8fc8c02/volumes" Feb 02 14:00:12 crc kubenswrapper[4955]: I0202 14:00:12.050620 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 14:00:12 crc kubenswrapper[4955]: I0202 14:00:12.110646 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 14:00:12 crc kubenswrapper[4955]: I0202 14:00:12.300909 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lj6jw"] Feb 02 14:00:13 crc kubenswrapper[4955]: I0202 14:00:13.573589 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lj6jw" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="registry-server" containerID="cri-o://793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1" gracePeriod=2 Feb 02 14:00:13 crc kubenswrapper[4955]: I0202 14:00:13.999361 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.125629 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-utilities\") pod \"7b765d4b-5556-4eca-9ada-c9b85d669d94\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.125793 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-catalog-content\") pod \"7b765d4b-5556-4eca-9ada-c9b85d669d94\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.125830 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-726md\" (UniqueName: \"kubernetes.io/projected/7b765d4b-5556-4eca-9ada-c9b85d669d94-kube-api-access-726md\") pod \"7b765d4b-5556-4eca-9ada-c9b85d669d94\" (UID: \"7b765d4b-5556-4eca-9ada-c9b85d669d94\") " Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.126694 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-utilities" (OuterVolumeSpecName: "utilities") pod "7b765d4b-5556-4eca-9ada-c9b85d669d94" (UID: "7b765d4b-5556-4eca-9ada-c9b85d669d94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.133814 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b765d4b-5556-4eca-9ada-c9b85d669d94-kube-api-access-726md" (OuterVolumeSpecName: "kube-api-access-726md") pod "7b765d4b-5556-4eca-9ada-c9b85d669d94" (UID: "7b765d4b-5556-4eca-9ada-c9b85d669d94"). InnerVolumeSpecName "kube-api-access-726md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.229546 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.229624 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-726md\" (UniqueName: \"kubernetes.io/projected/7b765d4b-5556-4eca-9ada-c9b85d669d94-kube-api-access-726md\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.246819 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b765d4b-5556-4eca-9ada-c9b85d669d94" (UID: "7b765d4b-5556-4eca-9ada-c9b85d669d94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.331686 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b765d4b-5556-4eca-9ada-c9b85d669d94-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.586075 4955 generic.go:334] "Generic (PLEG): container finished" podID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerID="793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1" exitCode=0 Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.586123 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerDied","Data":"793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1"} Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.586162 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj6jw" event={"ID":"7b765d4b-5556-4eca-9ada-c9b85d669d94","Type":"ContainerDied","Data":"8733b311ae8c4225cf69969384f43d575a2217fe6ed6485a141005b415934776"} Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.586170 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lj6jw" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.586184 4955 scope.go:117] "RemoveContainer" containerID="793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.623742 4955 scope.go:117] "RemoveContainer" containerID="6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.626283 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lj6jw"] Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.644862 4955 scope.go:117] "RemoveContainer" containerID="06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.650166 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lj6jw"] Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.690602 4955 scope.go:117] "RemoveContainer" containerID="793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1" Feb 02 14:00:14 crc kubenswrapper[4955]: E0202 14:00:14.691248 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1\": container with ID starting with 793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1 not found: ID does not exist" containerID="793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.691315 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1"} err="failed to get container status \"793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1\": rpc error: code = NotFound desc = could not find container \"793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1\": container with ID starting with 793ececd1fc7a60f117c7873b86d50286f4c1303ef696cdd60ab126e6098d6e1 not found: ID does not exist" Feb 02 14:00:14 crc 
kubenswrapper[4955]: I0202 14:00:14.691346 4955 scope.go:117] "RemoveContainer" containerID="6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a" Feb 02 14:00:14 crc kubenswrapper[4955]: E0202 14:00:14.691904 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a\": container with ID starting with 6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a not found: ID does not exist" containerID="6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.691941 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a"} err="failed to get container status \"6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a\": rpc error: code = NotFound desc = could not find container \"6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a\": container with ID starting with 6091104ce1e551dc0ce561420dd7ef1c2be17ebdc15264cc7b5bdd6ad1df480a not found: ID does not exist" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.691969 4955 scope.go:117] "RemoveContainer" containerID="06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa" Feb 02 14:00:14 crc kubenswrapper[4955]: E0202 14:00:14.692515 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa\": container with ID starting with 06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa not found: ID does not exist" containerID="06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa" Feb 02 14:00:14 crc kubenswrapper[4955]: I0202 14:00:14.692547 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa"} err="failed to get container status \"06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa\": rpc error: code = NotFound desc = could not find container \"06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa\": container with ID starting with 06ab7cc50f20639e1bc6ccac95584134d4d0e9eb314f61ad43d893b580e6a6fa not found: ID does not exist" Feb 02 14:00:15 crc kubenswrapper[4955]: I0202 14:00:15.727884 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" path="/var/lib/kubelet/pods/7b765d4b-5556-4eca-9ada-c9b85d669d94/volumes" Feb 02 14:00:40 crc kubenswrapper[4955]: I0202 14:00:40.160014 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log" Feb 02 14:00:41 crc kubenswrapper[4955]: I0202 14:00:41.672937 4955 scope.go:117] "RemoveContainer" containerID="f95eca64b1cc15ad05af7ef970ad0a6db4b37b5fc97b1f7f24a8c2a6a5006f88" Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.322380 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.322980 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="prometheus" 
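The E-level triplets above (RemoveContainer, ContainerStatus NotFound, "DeleteContainer returned error") are the kubelet re-deleting containers that CRI-O already pruned when the pod went away; the NotFound is the expected outcome of a double delete, not a fault. A small sketch to confirm each NotFound refers to the ID the kubelet had just asked to remove (patterns are illustrative, fed from the journal on stdin):

    import re
    import sys

    remove = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
    notfound = re.compile(r'could not find container \\+"([0-9a-f]{64})')

    last_removed = None
    for line in sys.stdin:
        if (m := remove.search(line)):
            last_removed = m.group(1)
        elif (m := notfound.search(line)):
            cid = m.group(1)
            # a NotFound for the ID just handed to RemoveContainer is routine cleanup
            verdict = "benign double delete" if cid == last_removed else "unexpected"
            print(cid[:13], verdict)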
containerID="cri-o://6584d9849250eb322e4bab36475d055d3e9a8256a7b13e26745cb7fc8fbc29ff" gracePeriod=600 Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.323150 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="thanos-sidecar" containerID="cri-o://eb4d85218d6a88798d69ed53171a2c219741b239f9a3c5d14a857759663c0664" gracePeriod=600 Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.323231 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="config-reloader" containerID="cri-o://bd0c84d96a0adc55237201ea0fa51744b80262c5c4af413dafa78ba5e505999c" gracePeriod=600 Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.846936 4955 generic.go:334] "Generic (PLEG): container finished" podID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerID="eb4d85218d6a88798d69ed53171a2c219741b239f9a3c5d14a857759663c0664" exitCode=0 Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.847227 4955 generic.go:334] "Generic (PLEG): container finished" podID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerID="bd0c84d96a0adc55237201ea0fa51744b80262c5c4af413dafa78ba5e505999c" exitCode=0 Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.847242 4955 generic.go:334] "Generic (PLEG): container finished" podID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerID="6584d9849250eb322e4bab36475d055d3e9a8256a7b13e26745cb7fc8fbc29ff" exitCode=0 Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.847146 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerDied","Data":"eb4d85218d6a88798d69ed53171a2c219741b239f9a3c5d14a857759663c0664"} Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.847283 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerDied","Data":"bd0c84d96a0adc55237201ea0fa51744b80262c5c4af413dafa78ba5e505999c"} Feb 02 14:00:43 crc kubenswrapper[4955]: I0202 14:00:43.847299 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerDied","Data":"6584d9849250eb322e4bab36475d055d3e9a8256a7b13e26745cb7fc8fbc29ff"} Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.362946 4955 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.362946 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454628 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454700 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454779 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454853 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-secret-combined-ca-bundle\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454908 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454934 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-2\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.454986 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-db\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455065 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqpnz\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-kube-api-access-qqpnz\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455113 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config-out\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455170 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-thanos-prometheus-http-client-file\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455223 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-1\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455308 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-0\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455337 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-tls-assets\") pod \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\" (UID: \"2fc665e1-000e-4aa8-b5e6-08b153ab82c3\") "
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455459 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455835 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455947 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.455990 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.456188 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.456415 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-db" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "prometheus-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.462021 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.462064 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-kube-api-access-qqpnz" (OuterVolumeSpecName: "kube-api-access-qqpnz") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "kube-api-access-qqpnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.473882 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.473904 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config" (OuterVolumeSpecName: "config") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.473936 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.473961 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.474001 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.474900 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config-out" (OuterVolumeSpecName: "config-out") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558407 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqpnz\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-kube-api-access-qqpnz\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558445 4955 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config-out\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558458 4955 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558473 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558485 4955 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558499 4955 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558514 4955 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558531 4955 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558549 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.558583 4955 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-prometheus-metric-storage-db\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.559191 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config" (OuterVolumeSpecName: "web-config") pod "2fc665e1-000e-4aa8-b5e6-08b153ab82c3" (UID: "2fc665e1-000e-4aa8-b5e6-08b153ab82c3"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.660439 4955 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fc665e1-000e-4aa8-b5e6-08b153ab82c3-web-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.858694 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2fc665e1-000e-4aa8-b5e6-08b153ab82c3","Type":"ContainerDied","Data":"371c5b469f9e1e44338ef48a169f8f20c21847bc92baa5f5480d71bef5fcdacd"}
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.858996 4955 scope.go:117] "RemoveContainer" containerID="eb4d85218d6a88798d69ed53171a2c219741b239f9a3c5d14a857759663c0664"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.858786 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.887915 4955 scope.go:117] "RemoveContainer" containerID="bd0c84d96a0adc55237201ea0fa51744b80262c5c4af413dafa78ba5e505999c"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.911762 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.920537 4955 scope.go:117] "RemoveContainer" containerID="6584d9849250eb322e4bab36475d055d3e9a8256a7b13e26745cb7fc8fbc29ff"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.931812 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.953607 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954101 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="config-reloader"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954131 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="config-reloader"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954159 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="extract-content"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954170 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="extract-content"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954306 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="thanos-sidecar"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954319 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="thanos-sidecar"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954338 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c731bf44-2d70-4227-b7f1-87fc32a0b596" containerName="collect-profiles"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954346 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c731bf44-2d70-4227-b7f1-87fc32a0b596" containerName="collect-profiles"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954365 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="prometheus"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954377 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="prometheus"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954388 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="extract-utilities"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954396 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="extract-utilities"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954417 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="registry-server"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954424 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="registry-server"
Feb 02 14:00:44 crc kubenswrapper[4955]: E0202 14:00:44.954440 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="init-config-reloader"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954448 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="init-config-reloader"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954704 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c731bf44-2d70-4227-b7f1-87fc32a0b596" containerName="collect-profiles"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954725 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="config-reloader"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954737 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="prometheus"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954754 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" containerName="thanos-sidecar"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.954775 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b765d4b-5556-4eca-9ada-c9b85d669d94" containerName="registry-server"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.957033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962044 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pv5nd"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962310 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962526 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962698 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962758 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962713 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.962913 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.967789 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 02 14:00:44 crc kubenswrapper[4955]: I0202 14:00:44.968428 4955 scope.go:117] "RemoveContainer" containerID="dc1fcb7a31e0a9036bf3648866ff9ba4e1c4032666b36eb4219ea84b30e94d91"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.017061 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.022875 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069063 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069189 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069264 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069338 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-config\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/834ba568-0b9b-45d2-8479-1d96bcb78803-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.069621 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.072713 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/834ba568-0b9b-45d2-8479-1d96bcb78803-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.073597 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4mn\" (UniqueName: \"kubernetes.io/projected/834ba568-0b9b-45d2-8479-1d96bcb78803-kube-api-access-8g4mn\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.073653 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.073675 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.073705 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.073790 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176055 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176119 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176157 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176184 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-config\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176240 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/834ba568-0b9b-45d2-8479-1d96bcb78803-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176266 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176299 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/834ba568-0b9b-45d2-8479-1d96bcb78803-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176369 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g4mn\" (UniqueName: \"kubernetes.io/projected/834ba568-0b9b-45d2-8479-1d96bcb78803-kube-api-access-8g4mn\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176404 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176424 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176448 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176515 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176547 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.176972 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.177355 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.177608 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.178494 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/834ba568-0b9b-45d2-8479-1d96bcb78803-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.183032 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/834ba568-0b9b-45d2-8479-1d96bcb78803-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.183238 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-config\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.197440 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.197781 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g4mn\" (UniqueName: \"kubernetes.io/projected/834ba568-0b9b-45d2-8479-1d96bcb78803-kube-api-access-8g4mn\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.197488 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.198685 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/834ba568-0b9b-45d2-8479-1d96bcb78803-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.200255 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.201253 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.221262 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834ba568-0b9b-45d2-8479-1d96bcb78803-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"834ba568-0b9b-45d2-8479-1d96bcb78803\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.302938 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 14:00:45 crc kubenswrapper[4955]: I0202 14:00:45.742723 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc665e1-000e-4aa8-b5e6-08b153ab82c3" path="/var/lib/kubelet/pods/2fc665e1-000e-4aa8-b5e6-08b153ab82c3/volumes"
Feb 02 14:00:46 crc kubenswrapper[4955]: I0202 14:00:46.551116 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 14:00:46 crc kubenswrapper[4955]: W0202 14:00:46.552011 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod834ba568_0b9b_45d2_8479_1d96bcb78803.slice/crio-6297540d66832794e9f0379953f6e6c9bc2f5877795d6ca32cb4009231089057 WatchSource:0}: Error finding container 6297540d66832794e9f0379953f6e6c9bc2f5877795d6ca32cb4009231089057: Status 404 returned error can't find the container with id 6297540d66832794e9f0379953f6e6c9bc2f5877795d6ca32cb4009231089057
Feb 02 14:00:46 crc kubenswrapper[4955]: I0202 14:00:46.880522 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"834ba568-0b9b-45d2-8479-1d96bcb78803","Type":"ContainerStarted","Data":"6297540d66832794e9f0379953f6e6c9bc2f5877795d6ca32cb4009231089057"}
Feb 02 14:00:49 crc kubenswrapper[4955]: I0202 14:00:49.911637 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"834ba568-0b9b-45d2-8479-1d96bcb78803","Type":"ContainerStarted","Data":"7b8ee11f36df55425ecd8da74d2ab943258083445afe8808c7bdc60ae634c07e"}
Feb 02 14:00:56 crc kubenswrapper[4955]: I0202 14:00:56.979727 4955 generic.go:334] "Generic (PLEG): container finished" podID="834ba568-0b9b-45d2-8479-1d96bcb78803" containerID="7b8ee11f36df55425ecd8da74d2ab943258083445afe8808c7bdc60ae634c07e" exitCode=0
Feb 02 14:00:56 crc kubenswrapper[4955]: I0202 14:00:56.979819 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"834ba568-0b9b-45d2-8479-1d96bcb78803","Type":"ContainerDied","Data":"7b8ee11f36df55425ecd8da74d2ab943258083445afe8808c7bdc60ae634c07e"}
Feb 02 14:00:57 crc kubenswrapper[4955]: I0202 14:00:57.991043 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"834ba568-0b9b-45d2-8479-1d96bcb78803","Type":"ContainerStarted","Data":"ac13059384eaf3cd6b05bea21c7487a7b0ed5187d76a6bef8b6ef67b4bf1e903"}
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.159550 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500681-j9g6c"]
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.164652 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.180252 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500681-j9g6c"]
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.261217 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-combined-ca-bundle\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.261306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-fernet-keys\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.261336 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7dt\" (UniqueName: \"kubernetes.io/projected/d1f2b1c6-35da-4e81-9161-265f0009f0e0-kube-api-access-fr7dt\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.261366 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-config-data\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.363185 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-combined-ca-bundle\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.363258 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-fernet-keys\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.363335 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr7dt\" (UniqueName: \"kubernetes.io/projected/d1f2b1c6-35da-4e81-9161-265f0009f0e0-kube-api-access-fr7dt\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.363366 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-config-data\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.370019 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-config-data\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.370332 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-fernet-keys\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.372363 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-combined-ca-bundle\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.382849 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr7dt\" (UniqueName: \"kubernetes.io/projected/d1f2b1c6-35da-4e81-9161-265f0009f0e0-kube-api-access-fr7dt\") pod \"keystone-cron-29500681-j9g6c\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.494307 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500681-j9g6c"
Feb 02 14:01:00 crc kubenswrapper[4955]: I0202 14:01:00.934819 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500681-j9g6c"]
Feb 02 14:01:00 crc kubenswrapper[4955]: W0202 14:01:00.936742 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f2b1c6_35da_4e81_9161_265f0009f0e0.slice/crio-f4dfc3eee3473148d0498ff165d05955d270d4234306ea0019cb743047f80535 WatchSource:0}: Error finding container f4dfc3eee3473148d0498ff165d05955d270d4234306ea0019cb743047f80535: Status 404 returned error can't find the container with id f4dfc3eee3473148d0498ff165d05955d270d4234306ea0019cb743047f80535
Feb 02 14:01:01 crc kubenswrapper[4955]: I0202 14:01:01.024841 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-j9g6c" event={"ID":"d1f2b1c6-35da-4e81-9161-265f0009f0e0","Type":"ContainerStarted","Data":"f4dfc3eee3473148d0498ff165d05955d270d4234306ea0019cb743047f80535"}
Feb 02 14:01:01 crc kubenswrapper[4955]: I0202 14:01:01.027925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"834ba568-0b9b-45d2-8479-1d96bcb78803","Type":"ContainerStarted","Data":"bb62f614166149f5406a3fbd1d3215468d7510147511789a88d939fb722b7c33"}
Feb 02 14:01:01 crc kubenswrapper[4955]: I0202 14:01:01.027956 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"834ba568-0b9b-45d2-8479-1d96bcb78803","Type":"ContainerStarted","Data":"175f42354d3d16a03f0ab7b4b78d756eb656b654b97449d6333f49df325414ed"}
Feb 02 14:01:01 crc kubenswrapper[4955]: I0202 14:01:01.056541 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.056318182 podStartE2EDuration="17.056318182s" podCreationTimestamp="2026-02-02 14:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:01:01.050810378 +0000 UTC m=+3511.963146848" watchObservedRunningTime="2026-02-02 14:01:01.056318182 +0000 UTC m=+3511.968654632"
Feb 02 14:01:02 crc kubenswrapper[4955]: I0202 14:01:02.036857 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-j9g6c" event={"ID":"d1f2b1c6-35da-4e81-9161-265f0009f0e0","Type":"ContainerStarted","Data":"13ca4692f0cba1305d215e5ee4b13a7358894923204ff4b39efb09d0b64b1085"}
Feb 02 14:01:03 crc kubenswrapper[4955]: I0202 14:01:03.016962 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:01:03 crc kubenswrapper[4955]: I0202 14:01:03.017311 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Need to start a new one" pod="openstack/keystone-cron-29500681-j9g6c" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.564416 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-fernet-keys\") pod \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.564831 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr7dt\" (UniqueName: \"kubernetes.io/projected/d1f2b1c6-35da-4e81-9161-265f0009f0e0-kube-api-access-fr7dt\") pod \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.564874 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-config-data\") pod \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.564981 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-combined-ca-bundle\") pod \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\" (UID: \"d1f2b1c6-35da-4e81-9161-265f0009f0e0\") " Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.569616 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d1f2b1c6-35da-4e81-9161-265f0009f0e0" (UID: "d1f2b1c6-35da-4e81-9161-265f0009f0e0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.570155 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f2b1c6-35da-4e81-9161-265f0009f0e0-kube-api-access-fr7dt" (OuterVolumeSpecName: "kube-api-access-fr7dt") pod "d1f2b1c6-35da-4e81-9161-265f0009f0e0" (UID: "d1f2b1c6-35da-4e81-9161-265f0009f0e0"). InnerVolumeSpecName "kube-api-access-fr7dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.592412 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1f2b1c6-35da-4e81-9161-265f0009f0e0" (UID: "d1f2b1c6-35da-4e81-9161-265f0009f0e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.613460 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-config-data" (OuterVolumeSpecName: "config-data") pod "d1f2b1c6-35da-4e81-9161-265f0009f0e0" (UID: "d1f2b1c6-35da-4e81-9161-265f0009f0e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.667989 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.668042 4955 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.668056 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr7dt\" (UniqueName: \"kubernetes.io/projected/d1f2b1c6-35da-4e81-9161-265f0009f0e0-kube-api-access-fr7dt\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:05 crc kubenswrapper[4955]: I0202 14:01:05.668073 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f2b1c6-35da-4e81-9161-265f0009f0e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:06 crc kubenswrapper[4955]: I0202 14:01:06.078189 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-j9g6c" event={"ID":"d1f2b1c6-35da-4e81-9161-265f0009f0e0","Type":"ContainerDied","Data":"f4dfc3eee3473148d0498ff165d05955d270d4234306ea0019cb743047f80535"} Feb 02 14:01:06 crc kubenswrapper[4955]: I0202 14:01:06.078239 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4dfc3eee3473148d0498ff165d05955d270d4234306ea0019cb743047f80535" Feb 02 14:01:06 crc kubenswrapper[4955]: I0202 14:01:06.078271 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500681-j9g6c" Feb 02 14:01:15 crc kubenswrapper[4955]: I0202 14:01:15.303653 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 14:01:15 crc kubenswrapper[4955]: I0202 14:01:15.310146 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 02 14:01:16 crc kubenswrapper[4955]: I0202 14:01:16.170397 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 14:01:33 crc kubenswrapper[4955]: I0202 14:01:33.017151 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:01:33 crc kubenswrapper[4955]: I0202 14:01:33.017744 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.017268 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.019178 
4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.019261 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.020897 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.020974 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" gracePeriod=600 Feb 02 14:02:03 crc kubenswrapper[4955]: E0202 14:02:03.153643 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.592576 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" exitCode=0 Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.592613 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3"} Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.592699 4955 scope.go:117] "RemoveContainer" containerID="a885561f2dbf1d8ee032bf28e04cb51494deebc8003c906669492185d54028be" Feb 02 14:02:03 crc kubenswrapper[4955]: I0202 14:02:03.593628 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:02:03 crc kubenswrapper[4955]: E0202 14:02:03.593968 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:02:16 crc kubenswrapper[4955]: I0202 14:02:16.716785 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:02:16 crc kubenswrapper[4955]: E0202 14:02:16.717886 4955 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:02:27 crc kubenswrapper[4955]: I0202 14:02:27.716533 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:02:27 crc kubenswrapper[4955]: E0202 14:02:27.717543 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:02:42 crc kubenswrapper[4955]: I0202 14:02:42.716625 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:02:42 crc kubenswrapper[4955]: E0202 14:02:42.717376 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:02:57 crc kubenswrapper[4955]: I0202 14:02:57.716264 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:02:57 crc kubenswrapper[4955]: E0202 14:02:57.717160 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:03:08 crc kubenswrapper[4955]: I0202 14:03:08.716545 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:03:08 crc kubenswrapper[4955]: E0202 14:03:08.717290 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:03:19 crc kubenswrapper[4955]: I0202 14:03:19.724234 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:03:19 crc kubenswrapper[4955]: E0202 14:03:19.725064 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:03:31 crc kubenswrapper[4955]: I0202 14:03:31.717088 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:03:31 crc kubenswrapper[4955]: E0202 14:03:31.717961 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:03:45 crc kubenswrapper[4955]: I0202 14:03:45.716762 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:03:45 crc kubenswrapper[4955]: E0202 14:03:45.717589 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:03:59 crc kubenswrapper[4955]: I0202 14:03:59.724109 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:03:59 crc kubenswrapper[4955]: E0202 14:03:59.724833 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:04:12 crc kubenswrapper[4955]: I0202 14:04:12.716231 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:04:12 crc kubenswrapper[4955]: E0202 14:04:12.717447 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:04:23 crc kubenswrapper[4955]: I0202 14:04:23.716871 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:04:23 crc kubenswrapper[4955]: E0202 14:04:23.718958 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" 
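[editor's note] The RemoveContainer / CrashLoopBackOff pairs repeating above (14:02:16, 14:02:27, 14:02:42, ...) are the kubelet's periodic sync retrying a container that is held in restart backoff; "back-off 5m0s" means the per-container backoff has already grown to its cap. A sketch of that schedule, assuming the kubelet's stock parameters (10s initial delay, doubling per crash, capped at 5 minutes) rather than anything stated in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet crash-loop restart backoff defaults:
	// start at 10s, double after each crash, cap at 5m.
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for crash := 1; crash <= 8; crash++ {
		fmt.Printf("after crash %d: wait %s\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // "back-off 5m0s" in the log = cap reached
		}
	}
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, 5m0s
}
```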
pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:04:37 crc kubenswrapper[4955]: I0202 14:04:37.716184 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:04:37 crc kubenswrapper[4955]: E0202 14:04:37.716990 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:04:42 crc kubenswrapper[4955]: I0202 14:04:42.951451 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log" Feb 02 14:04:50 crc kubenswrapper[4955]: I0202 14:04:50.716701 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:04:50 crc kubenswrapper[4955]: E0202 14:04:50.717400 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:05:05 crc kubenswrapper[4955]: I0202 14:05:05.716632 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:05:05 crc kubenswrapper[4955]: E0202 14:05:05.717608 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:05:17 crc kubenswrapper[4955]: I0202 14:05:17.716991 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:05:17 crc kubenswrapper[4955]: E0202 14:05:17.717983 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.829400 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bshf/must-gather-g9hv8"] Feb 02 14:05:18 crc kubenswrapper[4955]: E0202 14:05:18.830146 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f2b1c6-35da-4e81-9161-265f0009f0e0" containerName="keystone-cron" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.830161 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1f2b1c6-35da-4e81-9161-265f0009f0e0" containerName="keystone-cron" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.830440 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f2b1c6-35da-4e81-9161-265f0009f0e0" containerName="keystone-cron" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.831777 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.835838 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6bshf"/"kube-root-ca.crt" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.836033 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6bshf"/"default-dockercfg-wnp8s" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.836317 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6bshf"/"openshift-service-ca.crt" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.848174 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6bshf/must-gather-g9hv8"] Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.928850 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-must-gather-output\") pod \"must-gather-g9hv8\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:18 crc kubenswrapper[4955]: I0202 14:05:18.929361 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp892\" (UniqueName: \"kubernetes.io/projected/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-kube-api-access-jp892\") pod \"must-gather-g9hv8\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.030916 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp892\" (UniqueName: \"kubernetes.io/projected/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-kube-api-access-jp892\") pod \"must-gather-g9hv8\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.031007 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-must-gather-output\") pod \"must-gather-g9hv8\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.031616 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-must-gather-output\") pod \"must-gather-g9hv8\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.287960 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp892\" (UniqueName: \"kubernetes.io/projected/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-kube-api-access-jp892\") pod \"must-gather-g9hv8\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " 
pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.455684 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.934483 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6bshf/must-gather-g9hv8"] Feb 02 14:05:19 crc kubenswrapper[4955]: I0202 14:05:19.939353 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:05:20 crc kubenswrapper[4955]: I0202 14:05:20.344048 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/must-gather-g9hv8" event={"ID":"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1","Type":"ContainerStarted","Data":"b3e5669891c7acf706e674a0c88aa7d55ed7056972263a2e89182bd8211357bd"} Feb 02 14:05:24 crc kubenswrapper[4955]: I0202 14:05:24.382511 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/must-gather-g9hv8" event={"ID":"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1","Type":"ContainerStarted","Data":"2a775b54f460c6ce9be125ab63bdcdfcc1d0f2ac30cca0af19e68c6c6108ad72"} Feb 02 14:05:24 crc kubenswrapper[4955]: I0202 14:05:24.383054 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/must-gather-g9hv8" event={"ID":"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1","Type":"ContainerStarted","Data":"843ebeebc7124f03620c2ce7d55dddc491ec926dd3cf1a53fbcaf6f0097ead3f"} Feb 02 14:05:24 crc kubenswrapper[4955]: I0202 14:05:24.421535 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6bshf/must-gather-g9hv8" podStartSLOduration=2.536317177 podStartE2EDuration="6.421514155s" podCreationTimestamp="2026-02-02 14:05:18 +0000 UTC" firstStartedPulling="2026-02-02 14:05:19.939093795 +0000 UTC m=+3770.851430245" lastFinishedPulling="2026-02-02 14:05:23.824290773 +0000 UTC m=+3774.736627223" observedRunningTime="2026-02-02 14:05:24.419545547 +0000 UTC m=+3775.331881997" watchObservedRunningTime="2026-02-02 14:05:24.421514155 +0000 UTC m=+3775.333850605" Feb 02 14:05:28 crc kubenswrapper[4955]: I0202 14:05:28.716534 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:05:28 crc kubenswrapper[4955]: E0202 14:05:28.717418 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.665149 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bshf/crc-debug-wcrpp"] Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.666743 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.778048 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbgg\" (UniqueName: \"kubernetes.io/projected/42bc98e9-ad98-4525-9758-1639ebbb4d4d-kube-api-access-qxbgg\") pod \"crc-debug-wcrpp\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.778229 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42bc98e9-ad98-4525-9758-1639ebbb4d4d-host\") pod \"crc-debug-wcrpp\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.880603 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42bc98e9-ad98-4525-9758-1639ebbb4d4d-host\") pod \"crc-debug-wcrpp\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.880847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbgg\" (UniqueName: \"kubernetes.io/projected/42bc98e9-ad98-4525-9758-1639ebbb4d4d-kube-api-access-qxbgg\") pod \"crc-debug-wcrpp\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.881726 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42bc98e9-ad98-4525-9758-1639ebbb4d4d-host\") pod \"crc-debug-wcrpp\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.910580 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbgg\" (UniqueName: \"kubernetes.io/projected/42bc98e9-ad98-4525-9758-1639ebbb4d4d-kube-api-access-qxbgg\") pod \"crc-debug-wcrpp\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:29 crc kubenswrapper[4955]: I0202 14:05:29.985951 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:05:30 crc kubenswrapper[4955]: I0202 14:05:30.430762 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" event={"ID":"42bc98e9-ad98-4525-9758-1639ebbb4d4d","Type":"ContainerStarted","Data":"006af745c5a0d58bb739cbdc46ac2cbcb24ebd3e7c2d1e36cec067230499a5bb"} Feb 02 14:05:41 crc kubenswrapper[4955]: I0202 14:05:41.760484 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:05:41 crc kubenswrapper[4955]: E0202 14:05:41.770249 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:05:43 crc kubenswrapper[4955]: I0202 14:05:43.792393 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" event={"ID":"42bc98e9-ad98-4525-9758-1639ebbb4d4d","Type":"ContainerStarted","Data":"db60ea4e59d921f5209b41fe0edc8bfcf6e1aebdc9dbad06f0c0cb4eb8bb169f"} Feb 02 14:05:43 crc kubenswrapper[4955]: I0202 14:05:43.820244 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" podStartSLOduration=2.015658064 podStartE2EDuration="14.820223473s" podCreationTimestamp="2026-02-02 14:05:29 +0000 UTC" firstStartedPulling="2026-02-02 14:05:30.021512371 +0000 UTC m=+3780.933848821" lastFinishedPulling="2026-02-02 14:05:42.82607778 +0000 UTC m=+3793.738414230" observedRunningTime="2026-02-02 14:05:43.812620566 +0000 UTC m=+3794.724957036" watchObservedRunningTime="2026-02-02 14:05:43.820223473 +0000 UTC m=+3794.732559923" Feb 02 14:05:52 crc kubenswrapper[4955]: I0202 14:05:52.716887 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:05:52 crc kubenswrapper[4955]: E0202 14:05:52.717716 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:06:02 crc kubenswrapper[4955]: I0202 14:06:02.966097 4955 generic.go:334] "Generic (PLEG): container finished" podID="42bc98e9-ad98-4525-9758-1639ebbb4d4d" containerID="db60ea4e59d921f5209b41fe0edc8bfcf6e1aebdc9dbad06f0c0cb4eb8bb169f" exitCode=0 Feb 02 14:06:02 crc kubenswrapper[4955]: I0202 14:06:02.966183 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" event={"ID":"42bc98e9-ad98-4525-9758-1639ebbb4d4d","Type":"ContainerDied","Data":"db60ea4e59d921f5209b41fe0edc8bfcf6e1aebdc9dbad06f0c0cb4eb8bb169f"} Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.084336 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.115660 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bshf/crc-debug-wcrpp"] Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.126735 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bshf/crc-debug-wcrpp"] Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.132757 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42bc98e9-ad98-4525-9758-1639ebbb4d4d-host\") pod \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.132881 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bc98e9-ad98-4525-9758-1639ebbb4d4d-host" (OuterVolumeSpecName: "host") pod "42bc98e9-ad98-4525-9758-1639ebbb4d4d" (UID: "42bc98e9-ad98-4525-9758-1639ebbb4d4d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.133039 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbgg\" (UniqueName: \"kubernetes.io/projected/42bc98e9-ad98-4525-9758-1639ebbb4d4d-kube-api-access-qxbgg\") pod \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\" (UID: \"42bc98e9-ad98-4525-9758-1639ebbb4d4d\") " Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.133584 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42bc98e9-ad98-4525-9758-1639ebbb4d4d-host\") on node \"crc\" DevicePath \"\"" Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.143507 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bc98e9-ad98-4525-9758-1639ebbb4d4d-kube-api-access-qxbgg" (OuterVolumeSpecName: "kube-api-access-qxbgg") pod "42bc98e9-ad98-4525-9758-1639ebbb4d4d" (UID: "42bc98e9-ad98-4525-9758-1639ebbb4d4d"). InnerVolumeSpecName "kube-api-access-qxbgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.235290 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbgg\" (UniqueName: \"kubernetes.io/projected/42bc98e9-ad98-4525-9758-1639ebbb4d4d-kube-api-access-qxbgg\") on node \"crc\" DevicePath \"\"" Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.987117 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006af745c5a0d58bb739cbdc46ac2cbcb24ebd3e7c2d1e36cec067230499a5bb" Feb 02 14:06:04 crc kubenswrapper[4955]: I0202 14:06:04.987218 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-wcrpp" Feb 02 14:06:05 crc kubenswrapper[4955]: E0202 14:06:05.080428 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42bc98e9_ad98_4525_9758_1639ebbb4d4d.slice\": RecentStats: unable to find data in memory cache]" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.328737 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bshf/crc-debug-dhdb8"] Feb 02 14:06:05 crc kubenswrapper[4955]: E0202 14:06:05.329592 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bc98e9-ad98-4525-9758-1639ebbb4d4d" containerName="container-00" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.329609 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bc98e9-ad98-4525-9758-1639ebbb4d4d" containerName="container-00" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.329856 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bc98e9-ad98-4525-9758-1639ebbb4d4d" containerName="container-00" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.330666 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.461229 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2abf2079-37d3-4d94-ba8f-b853b6e85b56-host\") pod \"crc-debug-dhdb8\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.461803 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrj99\" (UniqueName: \"kubernetes.io/projected/2abf2079-37d3-4d94-ba8f-b853b6e85b56-kube-api-access-lrj99\") pod \"crc-debug-dhdb8\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.563796 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrj99\" (UniqueName: \"kubernetes.io/projected/2abf2079-37d3-4d94-ba8f-b853b6e85b56-kube-api-access-lrj99\") pod \"crc-debug-dhdb8\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.563921 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2abf2079-37d3-4d94-ba8f-b853b6e85b56-host\") pod \"crc-debug-dhdb8\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.564132 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2abf2079-37d3-4d94-ba8f-b853b6e85b56-host\") pod \"crc-debug-dhdb8\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.591646 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrj99\" (UniqueName: \"kubernetes.io/projected/2abf2079-37d3-4d94-ba8f-b853b6e85b56-kube-api-access-lrj99\") pod \"crc-debug-dhdb8\" 
(UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.656033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.716064 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:06:05 crc kubenswrapper[4955]: E0202 14:06:05.716286 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.729647 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bc98e9-ad98-4525-9758-1639ebbb4d4d" path="/var/lib/kubelet/pods/42bc98e9-ad98-4525-9758-1639ebbb4d4d/volumes" Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.998187 4955 generic.go:334] "Generic (PLEG): container finished" podID="2abf2079-37d3-4d94-ba8f-b853b6e85b56" containerID="42f49fd99c095f82fffec18a398730c16002574b0df71dc326764f281cab949e" exitCode=1 Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.998263 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/crc-debug-dhdb8" event={"ID":"2abf2079-37d3-4d94-ba8f-b853b6e85b56","Type":"ContainerDied","Data":"42f49fd99c095f82fffec18a398730c16002574b0df71dc326764f281cab949e"} Feb 02 14:06:05 crc kubenswrapper[4955]: I0202 14:06:05.998519 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/crc-debug-dhdb8" event={"ID":"2abf2079-37d3-4d94-ba8f-b853b6e85b56","Type":"ContainerStarted","Data":"ad9f25e046487143175ef0f7ed19d6b6536234e5c9e4cc39e0ed75cc743cf90c"} Feb 02 14:06:06 crc kubenswrapper[4955]: I0202 14:06:06.037450 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bshf/crc-debug-dhdb8"] Feb 02 14:06:06 crc kubenswrapper[4955]: I0202 14:06:06.046351 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bshf/crc-debug-dhdb8"] Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.102463 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.194471 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2abf2079-37d3-4d94-ba8f-b853b6e85b56-host\") pod \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.194575 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrj99\" (UniqueName: \"kubernetes.io/projected/2abf2079-37d3-4d94-ba8f-b853b6e85b56-kube-api-access-lrj99\") pod \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\" (UID: \"2abf2079-37d3-4d94-ba8f-b853b6e85b56\") " Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.194597 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2abf2079-37d3-4d94-ba8f-b853b6e85b56-host" (OuterVolumeSpecName: "host") pod "2abf2079-37d3-4d94-ba8f-b853b6e85b56" (UID: "2abf2079-37d3-4d94-ba8f-b853b6e85b56"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.194994 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2abf2079-37d3-4d94-ba8f-b853b6e85b56-host\") on node \"crc\" DevicePath \"\"" Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.201361 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abf2079-37d3-4d94-ba8f-b853b6e85b56-kube-api-access-lrj99" (OuterVolumeSpecName: "kube-api-access-lrj99") pod "2abf2079-37d3-4d94-ba8f-b853b6e85b56" (UID: "2abf2079-37d3-4d94-ba8f-b853b6e85b56"). InnerVolumeSpecName "kube-api-access-lrj99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.296661 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrj99\" (UniqueName: \"kubernetes.io/projected/2abf2079-37d3-4d94-ba8f-b853b6e85b56-kube-api-access-lrj99\") on node \"crc\" DevicePath \"\"" Feb 02 14:06:07 crc kubenswrapper[4955]: I0202 14:06:07.732717 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2abf2079-37d3-4d94-ba8f-b853b6e85b56" path="/var/lib/kubelet/pods/2abf2079-37d3-4d94-ba8f-b853b6e85b56/volumes" Feb 02 14:06:08 crc kubenswrapper[4955]: I0202 14:06:08.018153 4955 scope.go:117] "RemoveContainer" containerID="42f49fd99c095f82fffec18a398730c16002574b0df71dc326764f281cab949e" Feb 02 14:06:08 crc kubenswrapper[4955]: I0202 14:06:08.018171 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bshf/crc-debug-dhdb8" Feb 02 14:06:16 crc kubenswrapper[4955]: I0202 14:06:16.716446 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:06:16 crc kubenswrapper[4955]: E0202 14:06:16.717369 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:06:29 crc kubenswrapper[4955]: I0202 14:06:29.735901 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:06:29 crc kubenswrapper[4955]: E0202 14:06:29.738960 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:06:42 crc kubenswrapper[4955]: I0202 14:06:42.716609 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:06:42 crc kubenswrapper[4955]: E0202 14:06:42.717866 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.358584 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faac39c6-c177-4578-8943-1745793fd9cf/init-config-reloader/0.log" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.608210 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faac39c6-c177-4578-8943-1745793fd9cf/alertmanager/0.log" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.625336 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faac39c6-c177-4578-8943-1745793fd9cf/config-reloader/0.log" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.637442 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faac39c6-c177-4578-8943-1745793fd9cf/init-config-reloader/0.log" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.811357 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-78f69789f4-g5k2m_337dae88-5440-410a-8af7-1edfd336449f/barbican-api/0.log" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.845500 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-78f69789f4-g5k2m_337dae88-5440-410a-8af7-1edfd336449f/barbican-api-log/0.log" Feb 02 14:06:56 crc kubenswrapper[4955]: I0202 14:06:56.939866 4955 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b4bfb45d6-658rw_f8304221-9734-4385-80ea-be1ad2824ac1/barbican-keystone-listener/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.048200 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b4bfb45d6-658rw_f8304221-9734-4385-80ea-be1ad2824ac1/barbican-keystone-listener-log/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.126148 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664579f9ff-tk2xr_b940320a-acf4-4bf3-88b4-00a1689be1c5/barbican-worker/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.168635 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664579f9ff-tk2xr_b940320a-acf4-4bf3-88b4-00a1689be1c5/barbican-worker-log/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.328799 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8tsf4_9d101010-cfe5-49b0-b956-df76cc70abfc/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.382434 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4fbbb588-ff7e-4242-a370-8943bf57604e/ceilometer-central-agent/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.524593 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4fbbb588-ff7e-4242-a370-8943bf57604e/proxy-httpd/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.533454 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4fbbb588-ff7e-4242-a370-8943bf57604e/ceilometer-notification-agent/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.579938 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4fbbb588-ff7e-4242-a370-8943bf57604e/sg-core/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.717134 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:06:57 crc kubenswrapper[4955]: E0202 14:06:57.717397 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.740194 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_69d89f97-f871-47b8-ac3f-e0f2e4a8242a/cinder-api-log/0.log" Feb 02 14:06:57 crc kubenswrapper[4955]: I0202 14:06:57.763549 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_69d89f97-f871-47b8-ac3f-e0f2e4a8242a/cinder-api/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.036050 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_12de4219-046b-4e44-bfa5-ec028fad8812/cinder-scheduler/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.154809 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_12de4219-046b-4e44-bfa5-ec028fad8812/probe/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: 
I0202 14:06:58.212007 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pg7cb_e6c973ae-9b03-4511-abfc-360377684859/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.375127 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9288j_4d22bd39-e9c6-456d-98ea-5adcc4e3aa64/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.449930 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f64xq_daa79467-2e3c-4fa1-b8d0-ca5af14ed437/init/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.589336 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f64xq_daa79467-2e3c-4fa1-b8d0-ca5af14ed437/init/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.633412 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f64xq_daa79467-2e3c-4fa1-b8d0-ca5af14ed437/dnsmasq-dns/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.680692 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n2dbq_fa5c1aeb-8726-4269-89d0-fe07ca5c6c29/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.800310 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_342df619-1ebd-498c-9199-5c48a35fb732/glance-httpd/0.log" Feb 02 14:06:58 crc kubenswrapper[4955]: I0202 14:06:58.867826 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_342df619-1ebd-498c-9199-5c48a35fb732/glance-log/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.017198 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f22f6239-6425-47b5-9e00-664ff50c02dc/glance-httpd/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.035062 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f22f6239-6425-47b5-9e00-664ff50c02dc/glance-log/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.575696 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-9585f6f46-wl58s_f8e3a0ef-22c1-4f58-bd10-175541f41d88/heat-engine/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.644469 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-59594c5dbd-8r97t_8d242088-395a-4a37-abec-5a0a15e68d91/heat-api/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.668403 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-77c488fc4f-5xxjg_f7babdce-fcbe-452c-ac21-041d0cebf985/heat-cfnapi/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.827352 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-c8qkt_91b67e94-3e12-478a-8691-2768084b6229/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:06:59 crc kubenswrapper[4955]: I0202 14:06:59.885174 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-28wwz_9601eed5-d546-4700-b8a6-99a577f72612/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:00 crc kubenswrapper[4955]: I0202 14:07:00.190515 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500681-j9g6c_d1f2b1c6-35da-4e81-9161-265f0009f0e0/keystone-cron/0.log" Feb 02 14:07:00 crc kubenswrapper[4955]: I0202 14:07:00.223008 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d6d6d4df-7xvpd_71251c83-5af4-4373-8a77-0522fcda630e/keystone-api/0.log" Feb 02 14:07:00 crc kubenswrapper[4955]: I0202 14:07:00.408645 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7a7cd6f0-72ce-4cdd-ad99-481f5e3f8333/kube-state-metrics/0.log" Feb 02 14:07:00 crc kubenswrapper[4955]: I0202 14:07:00.524685 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sb6bg_85d18e13-5f42-4f3a-841c-2e900264b6a1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:00 crc kubenswrapper[4955]: I0202 14:07:00.830407 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cbd57d57-fclbt_41553210-5298-4320-8317-50cb36029594/neutron-api/0.log" Feb 02 14:07:00 crc kubenswrapper[4955]: I0202 14:07:00.876245 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cbd57d57-fclbt_41553210-5298-4320-8317-50cb36029594/neutron-httpd/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.034174 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-gc44w_d5685fb2-ca69-402f-baee-9fb7b6ba6dba/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.358767 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9056a98-ddc3-4c1b-8c5d-25a03e6163ce/nova-api-log/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.451831 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f768e1f2-169e-4773-8173-fd0d4f57d90d/nova-cell0-conductor-conductor/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.598301 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9056a98-ddc3-4c1b-8c5d-25a03e6163ce/nova-api-api/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.701469 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3da5793f-fd0c-4e87-9e72-dbd21447e050/nova-cell1-conductor-conductor/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.933266 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6ba337f5-4c96-4bcd-ad8e-9e12ebc857ce/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 14:07:01 crc kubenswrapper[4955]: I0202 14:07:01.971229 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xjd98_254c47a9-edbb-46bb-8c4a-72395aa8f8b0/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:02 crc kubenswrapper[4955]: I0202 14:07:02.240213 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_21d4cabc-0090-40ac-8afb-15ef9def8f7d/nova-metadata-log/0.log" Feb 02 14:07:02 crc kubenswrapper[4955]: I0202 14:07:02.375648 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_ffcfc78d-bc88-4849-843b-106dbb020bb4/nova-scheduler-scheduler/0.log" Feb 02 14:07:02 crc kubenswrapper[4955]: I0202 14:07:02.484734 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d3dda1e4-d043-4acc-ba59-2c64762956be/mysql-bootstrap/0.log" Feb 02 14:07:02 crc kubenswrapper[4955]: I0202 14:07:02.743991 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d3dda1e4-d043-4acc-ba59-2c64762956be/galera/0.log" Feb 02 14:07:02 crc kubenswrapper[4955]: I0202 14:07:02.759265 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d3dda1e4-d043-4acc-ba59-2c64762956be/mysql-bootstrap/0.log" Feb 02 14:07:02 crc kubenswrapper[4955]: I0202 14:07:02.965353 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d77aebb6-5e14-4958-b762-6e1f2e2c236e/mysql-bootstrap/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.162027 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d77aebb6-5e14-4958-b762-6e1f2e2c236e/mysql-bootstrap/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.189111 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d77aebb6-5e14-4958-b762-6e1f2e2c236e/galera/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.352153 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b9b94fc7-9059-4c45-b19d-ccca3f345be1/openstackclient/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.481178 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zvbpv_6a847a81-83ab-4560-924b-9051e0322672/openstack-network-exporter/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.536202 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_21d4cabc-0090-40ac-8afb-15ef9def8f7d/nova-metadata-metadata/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.704003 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5j4t_b42e4a2b-d820-45ba-afdf-ab9e0a6787a9/ovsdb-server-init/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.929688 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5j4t_b42e4a2b-d820-45ba-afdf-ab9e0a6787a9/ovs-vswitchd/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.942855 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5j4t_b42e4a2b-d820-45ba-afdf-ab9e0a6787a9/ovsdb-server-init/0.log" Feb 02 14:07:03 crc kubenswrapper[4955]: I0202 14:07:03.981097 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5j4t_b42e4a2b-d820-45ba-afdf-ab9e0a6787a9/ovsdb-server/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.162004 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-p7twj_480861f6-44ea-41c3-806e-497f3177eb91/ovn-controller/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.248238 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wk7cw_0a7ecd9e-7038-4eda-ae0b-833ecf729f48/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.422494 4955 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_6cbbc048-2495-466f-9649-8e95698e29d8/openstack-network-exporter/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.482829 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2856fe37-3113-44d2-ac52-f28f9d5aba38/openstack-network-exporter/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.540966 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6cbbc048-2495-466f-9649-8e95698e29d8/ovn-northd/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.622072 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2856fe37-3113-44d2-ac52-f28f9d5aba38/ovsdbserver-nb/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.816241 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_43af4b7b-306d-4c1d-9947-f4749eeed848/openstack-network-exporter/0.log" Feb 02 14:07:04 crc kubenswrapper[4955]: I0202 14:07:04.820084 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_43af4b7b-306d-4c1d-9947-f4749eeed848/ovsdbserver-sb/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.058740 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-cd4978596-vxlhp_fb0085d2-b778-45fa-9352-ae25b43713c1/placement-api/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.098935 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-cd4978596-vxlhp_fb0085d2-b778-45fa-9352-ae25b43713c1/placement-log/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.153376 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_834ba568-0b9b-45d2-8479-1d96bcb78803/init-config-reloader/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.328487 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_834ba568-0b9b-45d2-8479-1d96bcb78803/init-config-reloader/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.387736 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_834ba568-0b9b-45d2-8479-1d96bcb78803/prometheus/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.443244 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_834ba568-0b9b-45d2-8479-1d96bcb78803/thanos-sidecar/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.449190 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_834ba568-0b9b-45d2-8479-1d96bcb78803/config-reloader/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.623737 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bd53290f-8544-4e49-9f1b-8f3bc28332fc/setup-container/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.870445 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94b3af0f-2cf7-46b2-8558-fd172852d771/setup-container/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.883608 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bd53290f-8544-4e49-9f1b-8f3bc28332fc/rabbitmq/0.log" Feb 02 14:07:05 crc kubenswrapper[4955]: I0202 14:07:05.920621 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bd53290f-8544-4e49-9f1b-8f3bc28332fc/setup-container/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.113044 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94b3af0f-2cf7-46b2-8558-fd172852d771/setup-container/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.118288 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94b3af0f-2cf7-46b2-8558-fd172852d771/rabbitmq/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.173661 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tgn2s_660c8287-e36e-4216-8834-45913aa22480/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.351652 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-drkmq_ac761814-9187-4129-8167-eb4fda3b94d8/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.529344 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-k7d4q_e45f9d86-932f-4539-a7f3-f302ae6dfd53/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.576418 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qcxnr_4babc7e9-4c02-4643-a14f-719526d95e55/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.778289 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-657gr_5e4929a1-70d3-4e76-a561-c35bfc07b562/ssh-known-hosts-edpm-deployment/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.986702 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79fb55657c-85sjk_91116c53-5321-4170-9ec0-1c0588b81355/proxy-server/0.log" Feb 02 14:07:06 crc kubenswrapper[4955]: I0202 14:07:06.988714 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79fb55657c-85sjk_91116c53-5321-4170-9ec0-1c0588b81355/proxy-httpd/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.073016 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dv4r7_b3cb6dc2-d198-405c-816a-dd3ddb578ed4/swift-ring-rebalance/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.266124 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/account-auditor/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.337804 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/account-reaper/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.354984 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/account-replicator/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.489979 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/container-auditor/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.520807 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/account-server/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.578972 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/container-server/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.640758 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/container-replicator/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.750308 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/container-updater/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.795660 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/object-auditor/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.825029 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/object-expirer/0.log" Feb 02 14:07:07 crc kubenswrapper[4955]: I0202 14:07:07.947471 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/object-replicator/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.001042 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/object-server/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.022481 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/object-updater/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.049274 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/rsync/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.227372 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec1a6503-248d-4f72-a3ab-e23df2ca163d/swift-recon-cron/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.355365 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qcfcv_95a8c6f1-eab8-467a-9f76-5827e0c35a83/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.615091 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nfqll_34b65653-8bae-4ebf-a7c8-1de410bed9ac/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 14:07:08 crc kubenswrapper[4955]: I0202 14:07:08.716299 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:07:09 crc kubenswrapper[4955]: I0202 14:07:09.591547 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"917fe2126cbb3a7d8937c8ede6f6e0eb83a794f53a436a37d5cee320374819f9"} Feb 02 14:07:15 crc kubenswrapper[4955]: I0202 14:07:15.166138 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4f33de66-48c8-4dfb-954d-bf70e5791e04/memcached/0.log" Feb 02 14:07:24 crc 
kubenswrapper[4955]: I0202 14:07:24.217457 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pf424"] Feb 02 14:07:24 crc kubenswrapper[4955]: E0202 14:07:24.218432 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2abf2079-37d3-4d94-ba8f-b853b6e85b56" containerName="container-00" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.218447 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2abf2079-37d3-4d94-ba8f-b853b6e85b56" containerName="container-00" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.218652 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2abf2079-37d3-4d94-ba8f-b853b6e85b56" containerName="container-00" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.220391 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.229371 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf424"] Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.368337 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-utilities\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.368504 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbnmt\" (UniqueName: \"kubernetes.io/projected/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-kube-api-access-pbnmt\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.368733 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-catalog-content\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.470353 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-utilities\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.470705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbnmt\" (UniqueName: \"kubernetes.io/projected/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-kube-api-access-pbnmt\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.470860 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-catalog-content\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 
14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.471013 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-utilities\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.471269 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-catalog-content\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.491129 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbnmt\" (UniqueName: \"kubernetes.io/projected/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-kube-api-access-pbnmt\") pod \"certified-operators-pf424\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:24 crc kubenswrapper[4955]: I0202 14:07:24.588756 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:25 crc kubenswrapper[4955]: I0202 14:07:25.113545 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf424"] Feb 02 14:07:25 crc kubenswrapper[4955]: I0202 14:07:25.740774 4955 generic.go:334] "Generic (PLEG): container finished" podID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerID="85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a" exitCode=0 Feb 02 14:07:25 crc kubenswrapper[4955]: I0202 14:07:25.740837 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerDied","Data":"85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a"} Feb 02 14:07:25 crc kubenswrapper[4955]: I0202 14:07:25.741304 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerStarted","Data":"23a04f865759fbc730eea1f575f846c2d73c894440fa346f42980f7d72781a3b"} Feb 02 14:07:26 crc kubenswrapper[4955]: I0202 14:07:26.751842 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerStarted","Data":"093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055"} Feb 02 14:07:28 crc kubenswrapper[4955]: I0202 14:07:28.771726 4955 generic.go:334] "Generic (PLEG): container finished" podID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerID="093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055" exitCode=0 Feb 02 14:07:28 crc kubenswrapper[4955]: I0202 14:07:28.771801 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerDied","Data":"093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055"} Feb 02 14:07:29 crc kubenswrapper[4955]: I0202 14:07:29.786921 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" 
event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerStarted","Data":"56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd"} Feb 02 14:07:29 crc kubenswrapper[4955]: I0202 14:07:29.808174 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pf424" podStartSLOduration=2.35273432 podStartE2EDuration="5.808158432s" podCreationTimestamp="2026-02-02 14:07:24 +0000 UTC" firstStartedPulling="2026-02-02 14:07:25.743107966 +0000 UTC m=+3896.655444416" lastFinishedPulling="2026-02-02 14:07:29.198532078 +0000 UTC m=+3900.110868528" observedRunningTime="2026-02-02 14:07:29.80682868 +0000 UTC m=+3900.719165130" watchObservedRunningTime="2026-02-02 14:07:29.808158432 +0000 UTC m=+3900.720494872" Feb 02 14:07:33 crc kubenswrapper[4955]: I0202 14:07:33.631858 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/util/0.log" Feb 02 14:07:33 crc kubenswrapper[4955]: I0202 14:07:33.848193 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/util/0.log" Feb 02 14:07:33 crc kubenswrapper[4955]: I0202 14:07:33.879460 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/pull/0.log" Feb 02 14:07:33 crc kubenswrapper[4955]: I0202 14:07:33.937350 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/pull/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.099918 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/util/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.137372 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/pull/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.138202 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_83371bfe6bc114e95c3416290590934cdd2103012d7431f72a1f99a3e17jfb7_89286ac8-71fd-4cef-b2d0-ef2ca01a87e9/extract/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.379816 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-sd58m_7b0df3b7-68cf-4cb0-94b0-69b394da89c5/manager/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.381972 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-ts8vb_0e209e55-35cd-418f-902b-c16a5992677e/manager/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.513364 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-m2vf4_acac6a68-fe33-41eb-8f49-0fd47cc4f0d4/manager/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.588931 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.588980 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.644468 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-9jq6d_cc234403-1bdb-40c8-a931-62b193347ae7/manager/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.650072 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.820917 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-7zjp6_f25860b5-436a-486b-9d7d-065f19ac7f68/manager/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.880598 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-99z75_3ad669b6-5937-4a7a-9d0b-b54da1542c6f/manager/0.log" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.891349 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:34 crc kubenswrapper[4955]: I0202 14:07:34.941507 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf424"] Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.135355 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-vbqw7_43da80bd-2db6-4ee2-becb-fb97aa4e41bf/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.250112 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-cphc8_b61aade3-b2b3-4a5f-9862-a2018e56ea03/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.375603 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-6f8mt_697fa7c8-fb2d-411e-ad98-d7240bde28ae/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.412925 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-2xj4n_30794b6d-3a42-4d85-bdb3-adaf55b73301/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.582935 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-bw6fj_7e8608e6-cd83-4feb-ba63-261fc1a78437/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.674608 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-ndfws_212960c6-7b05-4094-ade0-e957cb3b76c8/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.875439 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-jh2sx_74a1c10d-25f8-4436-9762-ddcb86e6bb5e/manager/0.log" Feb 02 14:07:35 crc kubenswrapper[4955]: I0202 14:07:35.890495 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-nmvcl_fb69ed7c-575f-4abd-8bbd-a5a884e9333b/manager/0.log" 
Feb 02 14:07:36 crc kubenswrapper[4955]: I0202 14:07:36.011470 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlws4v_79b15e2e-40e0-4677-b580-667f81fd3550/manager/0.log" Feb 02 14:07:36 crc kubenswrapper[4955]: I0202 14:07:36.186195 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-75bcbdb4c8-sfqbx_70ae8d8e-2a48-4fb8-afed-8a3cf2234982/operator/0.log" Feb 02 14:07:36 crc kubenswrapper[4955]: I0202 14:07:36.432180 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cqqqp_9f07b7c4-c1b4-4ac7-a623-9cbf6885cfed/registry-server/0.log" Feb 02 14:07:36 crc kubenswrapper[4955]: I0202 14:07:36.754205 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-29k76_14a7569a-24fd-4aea-828d-ada50de34686/manager/0.log" Feb 02 14:07:36 crc kubenswrapper[4955]: I0202 14:07:36.760578 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-q8lhq_937de608-a64a-40ab-8a80-90800c18cf8f/manager/0.log" Feb 02 14:07:36 crc kubenswrapper[4955]: I0202 14:07:36.851938 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pf424" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="registry-server" containerID="cri-o://56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd" gracePeriod=2 Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.023832 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-npj7b_f231eb80-56bc-48bd-b412-a4247d11317f/operator/0.log" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.302224 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-hxf5r_ae3aa85d-803e-42ca-aff0-b2f5cb660355/manager/0.log" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.402208 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5ff9b8ccfd-zh2nx_0e5f1bee-07dd-4eaf-9a3b-328845abb141/manager/0.log" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.521520 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.616246 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pwrfk_4e156b3e-40e8-4ade-af5b-10f05949e12b/manager/0.log" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.620655 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbnmt\" (UniqueName: \"kubernetes.io/projected/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-kube-api-access-pbnmt\") pod \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.620703 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-utilities\") pod \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.620776 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-catalog-content\") pod \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\" (UID: \"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1\") " Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.622270 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-utilities" (OuterVolumeSpecName: "utilities") pod "e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" (UID: "e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.644802 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-kube-api-access-pbnmt" (OuterVolumeSpecName: "kube-api-access-pbnmt") pod "e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" (UID: "e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1"). InnerVolumeSpecName "kube-api-access-pbnmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.646514 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fb457d788-z4g9c_6ad0132c-7b8d-4342-8668-23e66e695a6e/manager/0.log" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.682608 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" (UID: "e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.723279 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.723306 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbnmt\" (UniqueName: \"kubernetes.io/projected/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-kube-api-access-pbnmt\") on node \"crc\" DevicePath \"\"" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.723316 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.810120 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-bnxrv_d0b0d88d-985e-4a4c-9f57-5ef95f00b9ac/manager/0.log" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.863259 4955 generic.go:334] "Generic (PLEG): container finished" podID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerID="56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd" exitCode=0 Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.863309 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerDied","Data":"56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd"} Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.863327 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf424" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.863345 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf424" event={"ID":"e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1","Type":"ContainerDied","Data":"23a04f865759fbc730eea1f575f846c2d73c894440fa346f42980f7d72781a3b"} Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.863365 4955 scope.go:117] "RemoveContainer" containerID="56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.894161 4955 scope.go:117] "RemoveContainer" containerID="093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.899615 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf424"] Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.913612 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pf424"] Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.923510 4955 scope.go:117] "RemoveContainer" containerID="85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.971808 4955 scope.go:117] "RemoveContainer" containerID="56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd" Feb 02 14:07:37 crc kubenswrapper[4955]: E0202 14:07:37.972335 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd\": container with ID starting with 56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd not found: ID does not exist" containerID="56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.972408 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd"} err="failed to get container status \"56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd\": rpc error: code = NotFound desc = could not find container \"56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd\": container with ID starting with 56a8e2afa099bd39f975aff9f3d2aab2a5720b9f2bae854735113cc228574acd not found: ID does not exist" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.972462 4955 scope.go:117] "RemoveContainer" containerID="093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055" Feb 02 14:07:37 crc kubenswrapper[4955]: E0202 14:07:37.972860 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055\": container with ID starting with 093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055 not found: ID does not exist" containerID="093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.972900 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055"} err="failed to get container status \"093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055\": rpc error: code = NotFound desc = could not find 
container \"093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055\": container with ID starting with 093f444db14387de2b7305d1c2c0ba14db8a71e162b9a5eb35842da21c65b055 not found: ID does not exist" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.972932 4955 scope.go:117] "RemoveContainer" containerID="85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a" Feb 02 14:07:37 crc kubenswrapper[4955]: E0202 14:07:37.973286 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a\": container with ID starting with 85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a not found: ID does not exist" containerID="85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a" Feb 02 14:07:37 crc kubenswrapper[4955]: I0202 14:07:37.973317 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a"} err="failed to get container status \"85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a\": rpc error: code = NotFound desc = could not find container \"85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a\": container with ID starting with 85d2653edd6c2b5a85e45ff9e1dd817aa6232860b4575bea2ff5a3a62fed367a not found: ID does not exist" Feb 02 14:07:39 crc kubenswrapper[4955]: I0202 14:07:39.725728 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" path="/var/lib/kubelet/pods/e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1/volumes" Feb 02 14:07:49 crc kubenswrapper[4955]: I0202 14:07:49.997060 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mxvh5"] Feb 02 14:07:49 crc kubenswrapper[4955]: E0202 14:07:49.998178 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="registry-server" Feb 02 14:07:49 crc kubenswrapper[4955]: I0202 14:07:49.998194 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="registry-server" Feb 02 14:07:49 crc kubenswrapper[4955]: E0202 14:07:49.998229 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="extract-utilities" Feb 02 14:07:49 crc kubenswrapper[4955]: I0202 14:07:49.998237 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="extract-utilities" Feb 02 14:07:49 crc kubenswrapper[4955]: E0202 14:07:49.998265 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="extract-content" Feb 02 14:07:49 crc kubenswrapper[4955]: I0202 14:07:49.998274 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="extract-content" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.003156 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c0f8d9-7ea7-4533-b2c3-1bfa35a138f1" containerName="registry-server" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.005270 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.008365 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mxvh5"] Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.091669 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-catalog-content\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.091751 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgsb\" (UniqueName: \"kubernetes.io/projected/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-kube-api-access-wlgsb\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.091782 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-utilities\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.194141 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-catalog-content\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.194262 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgsb\" (UniqueName: \"kubernetes.io/projected/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-kube-api-access-wlgsb\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.194298 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-utilities\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.194960 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-utilities\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.195242 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-catalog-content\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.232535 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wlgsb\" (UniqueName: \"kubernetes.io/projected/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-kube-api-access-wlgsb\") pod \"community-operators-mxvh5\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.329492 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.797831 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mxvh5"] Feb 02 14:07:50 crc kubenswrapper[4955]: I0202 14:07:50.987180 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerStarted","Data":"7957b3721273800efa9f9699c276c0bc360d40899bcf11bb6fe776b068d161a6"} Feb 02 14:07:51 crc kubenswrapper[4955]: I0202 14:07:51.996659 4955 generic.go:334] "Generic (PLEG): container finished" podID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerID="8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e" exitCode=0 Feb 02 14:07:51 crc kubenswrapper[4955]: I0202 14:07:51.996758 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerDied","Data":"8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e"} Feb 02 14:07:54 crc kubenswrapper[4955]: I0202 14:07:54.030387 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerStarted","Data":"0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539"} Feb 02 14:07:55 crc kubenswrapper[4955]: I0202 14:07:55.041092 4955 generic.go:334] "Generic (PLEG): container finished" podID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerID="0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539" exitCode=0 Feb 02 14:07:55 crc kubenswrapper[4955]: I0202 14:07:55.041192 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerDied","Data":"0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539"} Feb 02 14:07:56 crc kubenswrapper[4955]: I0202 14:07:56.053693 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerStarted","Data":"e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6"} Feb 02 14:07:56 crc kubenswrapper[4955]: I0202 14:07:56.084522 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mxvh5" podStartSLOduration=3.55748066 podStartE2EDuration="7.08449821s" podCreationTimestamp="2026-02-02 14:07:49 +0000 UTC" firstStartedPulling="2026-02-02 14:07:51.999127068 +0000 UTC m=+3922.911463518" lastFinishedPulling="2026-02-02 14:07:55.526144618 +0000 UTC m=+3926.438481068" observedRunningTime="2026-02-02 14:07:56.080092993 +0000 UTC m=+3926.992429443" watchObservedRunningTime="2026-02-02 14:07:56.08449821 +0000 UTC m=+3926.996834660" Feb 02 14:08:00 crc kubenswrapper[4955]: I0202 14:08:00.330113 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:08:00 crc kubenswrapper[4955]: I0202 14:08:00.330672 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:08:00 crc kubenswrapper[4955]: I0202 14:08:00.380020 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:08:01 crc kubenswrapper[4955]: I0202 14:08:01.174987 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:08:01 crc kubenswrapper[4955]: I0202 14:08:01.220184 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xh9w4_8458e825-0591-415f-960d-06357c721b4c/control-plane-machine-set-operator/0.log" Feb 02 14:08:01 crc kubenswrapper[4955]: I0202 14:08:01.221797 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mxvh5"] Feb 02 14:08:01 crc kubenswrapper[4955]: I0202 14:08:01.492846 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tkkbx_a1923bcc-1d3a-4205-807e-fdf37f3b08ea/kube-rbac-proxy/0.log" Feb 02 14:08:01 crc kubenswrapper[4955]: I0202 14:08:01.502035 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tkkbx_a1923bcc-1d3a-4205-807e-fdf37f3b08ea/machine-api-operator/0.log" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.165740 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mxvh5" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="registry-server" containerID="cri-o://e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6" gracePeriod=2 Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.661459 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.797750 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-catalog-content\") pod \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.797954 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-utilities\") pod \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.798122 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlgsb\" (UniqueName: \"kubernetes.io/projected/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-kube-api-access-wlgsb\") pod \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\" (UID: \"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e\") " Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.798934 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-utilities" (OuterVolumeSpecName: "utilities") pod "4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" (UID: "4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.813537 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-kube-api-access-wlgsb" (OuterVolumeSpecName: "kube-api-access-wlgsb") pod "4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" (UID: "4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e"). InnerVolumeSpecName "kube-api-access-wlgsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.894700 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" (UID: "4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.904923 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.904958 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlgsb\" (UniqueName: \"kubernetes.io/projected/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-kube-api-access-wlgsb\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:03 crc kubenswrapper[4955]: I0202 14:08:03.904970 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.174170 4955 generic.go:334] "Generic (PLEG): container finished" podID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerID="e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6" exitCode=0 Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.174224 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerDied","Data":"e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6"} Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.174248 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mxvh5" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.174259 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mxvh5" event={"ID":"4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e","Type":"ContainerDied","Data":"7957b3721273800efa9f9699c276c0bc360d40899bcf11bb6fe776b068d161a6"} Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.174281 4955 scope.go:117] "RemoveContainer" containerID="e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.206546 4955 scope.go:117] "RemoveContainer" containerID="0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.219445 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mxvh5"] Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.235163 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mxvh5"] Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.236163 4955 scope.go:117] "RemoveContainer" containerID="8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.275299 4955 scope.go:117] "RemoveContainer" containerID="e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6" Feb 02 14:08:04 crc kubenswrapper[4955]: E0202 14:08:04.275887 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6\": container with ID starting with e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6 not found: ID does not exist" containerID="e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.275941 
4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6"} err="failed to get container status \"e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6\": rpc error: code = NotFound desc = could not find container \"e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6\": container with ID starting with e10ddbe7fc863d4314afa0d5a67d82643e818a6a4898e8849459db2e73d53fb6 not found: ID does not exist" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.275974 4955 scope.go:117] "RemoveContainer" containerID="0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539" Feb 02 14:08:04 crc kubenswrapper[4955]: E0202 14:08:04.276335 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539\": container with ID starting with 0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539 not found: ID does not exist" containerID="0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.276362 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539"} err="failed to get container status \"0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539\": rpc error: code = NotFound desc = could not find container \"0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539\": container with ID starting with 0cd0038cd77dc8d962cd24f7ab3bbbbc079073d867ec1e229048284e91269539 not found: ID does not exist" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.276379 4955 scope.go:117] "RemoveContainer" containerID="8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e" Feb 02 14:08:04 crc kubenswrapper[4955]: E0202 14:08:04.276689 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e\": container with ID starting with 8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e not found: ID does not exist" containerID="8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e" Feb 02 14:08:04 crc kubenswrapper[4955]: I0202 14:08:04.276714 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e"} err="failed to get container status \"8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e\": rpc error: code = NotFound desc = could not find container \"8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e\": container with ID starting with 8e87b7198f921e93895832f192958557848d6ef34dbf612716b003492350af6e not found: ID does not exist" Feb 02 14:08:05 crc kubenswrapper[4955]: I0202 14:08:05.727720 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" path="/var/lib/kubelet/pods/4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e/volumes" Feb 02 14:08:15 crc kubenswrapper[4955]: I0202 14:08:15.290055 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-f9lzz_2d029f2e-d391-4487-9c7d-1141c569de70/cert-manager-controller/0.log" Feb 02 14:08:15 crc kubenswrapper[4955]: I0202 14:08:15.532053 
4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8j9ng_b9a86428-fca6-4380-88ff-785b5710dc8d/cert-manager-cainjector/0.log" Feb 02 14:08:15 crc kubenswrapper[4955]: I0202 14:08:15.566496 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-2t7cg_07c0146f-f36a-4055-8cad-b6c65b94ddf4/cert-manager-webhook/0.log" Feb 02 14:08:28 crc kubenswrapper[4955]: I0202 14:08:28.696600 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-hpkxf_34e639ba-ce03-41b0-a48f-04c959db2204/nmstate-console-plugin/0.log" Feb 02 14:08:28 crc kubenswrapper[4955]: I0202 14:08:28.892349 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ztws6_af032e86-91c3-4253-b263-3aab67e04b81/nmstate-handler/0.log" Feb 02 14:08:28 crc kubenswrapper[4955]: I0202 14:08:28.921717 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-gqqzg_2cad4ce5-2270-4724-b134-9d96c52a68ab/kube-rbac-proxy/0.log" Feb 02 14:08:29 crc kubenswrapper[4955]: I0202 14:08:29.100170 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-gqqzg_2cad4ce5-2270-4724-b134-9d96c52a68ab/nmstate-metrics/0.log" Feb 02 14:08:29 crc kubenswrapper[4955]: I0202 14:08:29.118265 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dnzw4_e8f7b05a-acde-41cb-abf2-ea46b972fe09/nmstate-operator/0.log" Feb 02 14:08:29 crc kubenswrapper[4955]: I0202 14:08:29.317768 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5kxd7_861649ff-e626-4bd9-84ae-350085318d2e/nmstate-webhook/0.log" Feb 02 14:08:43 crc kubenswrapper[4955]: I0202 14:08:43.934549 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-27mc7_e6cc386e-df97-4820-8215-ad295f03667a/prometheus-operator/0.log" Feb 02 14:08:44 crc kubenswrapper[4955]: I0202 14:08:44.127694 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk_853d1cb1-23c3-44ac-8b6d-5b645a3757f7/prometheus-operator-admission-webhook/0.log" Feb 02 14:08:44 crc kubenswrapper[4955]: I0202 14:08:44.136422 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv_98fef4d4-9fa6-4f3b-8f57-ca22ea29b499/prometheus-operator-admission-webhook/0.log" Feb 02 14:08:44 crc kubenswrapper[4955]: I0202 14:08:44.508744 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8jmvb_c857eb66-83ed-4042-9cdf-371ab6f7cbba/perses-operator/0.log" Feb 02 14:08:44 crc kubenswrapper[4955]: I0202 14:08:44.939709 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hr5gh_102afca0-ebad-4201-86e1-a07bff67d684/operator/0.log" Feb 02 14:08:59 crc kubenswrapper[4955]: I0202 14:08:59.808270 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zq68v_c06f0638-2793-4677-bba3-ef54f0b70498/kube-rbac-proxy/0.log" Feb 02 14:08:59 crc kubenswrapper[4955]: I0202 14:08:59.859569 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-zq68v_c06f0638-2793-4677-bba3-ef54f0b70498/controller/0.log" Feb 02 14:09:00 crc kubenswrapper[4955]: I0202 14:09:00.044855 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-frr-files/0.log" Feb 02 14:09:00 crc kubenswrapper[4955]: I0202 14:09:00.245843 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-frr-files/0.log" Feb 02 14:09:00 crc kubenswrapper[4955]: I0202 14:09:00.256519 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-reloader/0.log" Feb 02 14:09:00 crc kubenswrapper[4955]: I0202 14:09:00.302392 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-metrics/0.log" Feb 02 14:09:00 crc kubenswrapper[4955]: I0202 14:09:00.344172 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-reloader/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.043342 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-frr-files/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.086967 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-reloader/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.111669 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-metrics/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.123937 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-metrics/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.279138 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-frr-files/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.285826 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-reloader/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.311435 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/cp-metrics/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.325651 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/controller/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.478912 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/frr-metrics/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.523424 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/kube-rbac-proxy/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.549431 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/kube-rbac-proxy-frr/0.log" Feb 02 14:09:01 crc 
kubenswrapper[4955]: I0202 14:09:01.757478 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/reloader/0.log" Feb 02 14:09:01 crc kubenswrapper[4955]: I0202 14:09:01.798592 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-fsws2_0ce35428-6892-45fa-9455-6fc86652c803/frr-k8s-webhook-server/0.log" Feb 02 14:09:02 crc kubenswrapper[4955]: I0202 14:09:02.065661 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67485f44d-qvtcv_cb55affd-be36-4e4e-8757-2523cb995f32/manager/0.log" Feb 02 14:09:02 crc kubenswrapper[4955]: I0202 14:09:02.247893 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69d59c6b57-c75mt_486ac9ae-6679-4f9a-87cf-5fbba490e986/webhook-server/0.log" Feb 02 14:09:02 crc kubenswrapper[4955]: I0202 14:09:02.417849 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qhbgd_a6d58a99-7496-48ea-9a41-dec05d17a5be/kube-rbac-proxy/0.log" Feb 02 14:09:02 crc kubenswrapper[4955]: I0202 14:09:02.884003 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qhbgd_a6d58a99-7496-48ea-9a41-dec05d17a5be/speaker/0.log" Feb 02 14:09:03 crc kubenswrapper[4955]: I0202 14:09:03.155516 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxh79_40d0d0f1-33ab-4255-8608-7dcfdfde094e/frr/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.468056 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/util/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.656445 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/util/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.656915 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/pull/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.737263 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/pull/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.905783 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/util/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.913952 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/extract/0.log" Feb 02 14:09:15 crc kubenswrapper[4955]: I0202 14:09:15.918269 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9z9rl_be1d73da-a2bf-4a1f-91a1-dd5eaf8e077c/pull/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.102126 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/util/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.253269 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/util/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.254795 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/pull/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.278766 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/pull/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.434708 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/util/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.442931 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/pull/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.451941 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c5j6v_4dc45f9b-da78-485e-baf2-87572c369b4f/extract/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.609250 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/util/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.773907 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/util/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.807375 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/pull/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.817059 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/pull/0.log" Feb 02 14:09:16 crc kubenswrapper[4955]: I0202 14:09:16.983435 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/pull/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.012658 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/util/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.033061 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088bml4_4dbf1102-145e-49b6-87bd-1a7ce30e48b9/extract/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.197286 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/extract-utilities/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.358573 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/extract-content/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.383465 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/extract-content/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.387153 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/extract-utilities/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.595008 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/extract-content/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.645926 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/extract-utilities/0.log" Feb 02 14:09:17 crc kubenswrapper[4955]: I0202 14:09:17.833656 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/extract-utilities/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.032258 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qkf4x_eda30c6d-d71c-417b-8434-ff87281d64c7/registry-server/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.042869 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/extract-content/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.070086 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/extract-content/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.114231 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/extract-utilities/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.248076 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/extract-utilities/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.248680 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/extract-content/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.492857 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tcjcw_c95ff8c4-bd53-45dc-85fb-3292fbd52e0f/marketplace-operator/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.620628 4955 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/extract-utilities/0.log" Feb 02 14:09:18 crc kubenswrapper[4955]: I0202 14:09:18.871926 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxp4p_99aa1753-769d-42bf-92e4-751a1dfdf0be/registry-server/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.268304 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/extract-content/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.289740 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/extract-content/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.292304 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/extract-utilities/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.713773 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/extract-utilities/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.714349 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/extract-content/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.804473 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-24bdn_a77ac3de-f22e-420e-8fc2-167c9433d128/registry-server/0.log" Feb 02 14:09:19 crc kubenswrapper[4955]: I0202 14:09:19.969842 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/extract-utilities/0.log" Feb 02 14:09:20 crc kubenswrapper[4955]: I0202 14:09:20.126245 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/extract-utilities/0.log" Feb 02 14:09:20 crc kubenswrapper[4955]: I0202 14:09:20.162360 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/extract-content/0.log" Feb 02 14:09:20 crc kubenswrapper[4955]: I0202 14:09:20.193273 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/extract-content/0.log" Feb 02 14:09:20 crc kubenswrapper[4955]: I0202 14:09:20.398077 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/extract-utilities/0.log" Feb 02 14:09:20 crc kubenswrapper[4955]: I0202 14:09:20.961922 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/extract-content/0.log" Feb 02 14:09:21 crc kubenswrapper[4955]: I0202 14:09:21.257453 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-22rg9_d3887b11-d447-46f7-844d-d07d4a1d180c/registry-server/0.log" Feb 02 14:09:33 crc kubenswrapper[4955]: I0202 14:09:33.016545 4955 patch_prober.go:28] interesting 
pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:09:33 crc kubenswrapper[4955]: I0202 14:09:33.018213 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:09:33 crc kubenswrapper[4955]: I0202 14:09:33.801299 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-27mc7_e6cc386e-df97-4820-8215-ad295f03667a/prometheus-operator/0.log" Feb 02 14:09:33 crc kubenswrapper[4955]: I0202 14:09:33.837503 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d54db95d4-7hvlk_853d1cb1-23c3-44ac-8b6d-5b645a3757f7/prometheus-operator-admission-webhook/0.log" Feb 02 14:09:33 crc kubenswrapper[4955]: I0202 14:09:33.897939 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d54db95d4-qlxnv_98fef4d4-9fa6-4f3b-8f57-ca22ea29b499/prometheus-operator-admission-webhook/0.log" Feb 02 14:09:34 crc kubenswrapper[4955]: I0202 14:09:34.033387 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hr5gh_102afca0-ebad-4201-86e1-a07bff67d684/operator/0.log" Feb 02 14:09:34 crc kubenswrapper[4955]: I0202 14:09:34.056810 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8jmvb_c857eb66-83ed-4042-9cdf-371ab6f7cbba/perses-operator/0.log" Feb 02 14:10:03 crc kubenswrapper[4955]: I0202 14:10:03.017142 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:10:03 crc kubenswrapper[4955]: I0202 14:10:03.017692 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.016889 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.017452 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.017499 4955 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.018310 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"917fe2126cbb3a7d8937c8ede6f6e0eb83a794f53a436a37d5cee320374819f9"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.018369 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://917fe2126cbb3a7d8937c8ede6f6e0eb83a794f53a436a37d5cee320374819f9" gracePeriod=600 Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.584005 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="917fe2126cbb3a7d8937c8ede6f6e0eb83a794f53a436a37d5cee320374819f9" exitCode=0 Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.584214 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"917fe2126cbb3a7d8937c8ede6f6e0eb83a794f53a436a37d5cee320374819f9"} Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.584323 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerStarted","Data":"28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d"} Feb 02 14:10:33 crc kubenswrapper[4955]: I0202 14:10:33.584346 4955 scope.go:117] "RemoveContainer" containerID="6f8ca7aeec78ef855371f1792ed5e4fa6e6b202fefae75931e72a4fb059d89c3" Feb 02 14:11:13 crc kubenswrapper[4955]: I0202 14:11:13.949612 4955 generic.go:334] "Generic (PLEG): container finished" podID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerID="843ebeebc7124f03620c2ce7d55dddc491ec926dd3cf1a53fbcaf6f0097ead3f" exitCode=0 Feb 02 14:11:13 crc kubenswrapper[4955]: I0202 14:11:13.949680 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bshf/must-gather-g9hv8" event={"ID":"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1","Type":"ContainerDied","Data":"843ebeebc7124f03620c2ce7d55dddc491ec926dd3cf1a53fbcaf6f0097ead3f"} Feb 02 14:11:13 crc kubenswrapper[4955]: I0202 14:11:13.950841 4955 scope.go:117] "RemoveContainer" containerID="843ebeebc7124f03620c2ce7d55dddc491ec926dd3cf1a53fbcaf6f0097ead3f" Feb 02 14:11:14 crc kubenswrapper[4955]: I0202 14:11:14.776593 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bshf_must-gather-g9hv8_bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1/gather/0.log" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.331405 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bshf/must-gather-g9hv8"] Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.332118 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6bshf/must-gather-g9hv8" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="copy" containerID="cri-o://2a775b54f460c6ce9be125ab63bdcdfcc1d0f2ac30cca0af19e68c6c6108ad72" gracePeriod=2 Feb 02 
14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.342151 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bshf/must-gather-g9hv8"] Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.655367 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grj2n"] Feb 02 14:11:23 crc kubenswrapper[4955]: E0202 14:11:23.656397 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="extract-content" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656418 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="extract-content" Feb 02 14:11:23 crc kubenswrapper[4955]: E0202 14:11:23.656435 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="gather" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656443 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="gather" Feb 02 14:11:23 crc kubenswrapper[4955]: E0202 14:11:23.656457 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="registry-server" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656466 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="registry-server" Feb 02 14:11:23 crc kubenswrapper[4955]: E0202 14:11:23.656493 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="copy" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656501 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="copy" Feb 02 14:11:23 crc kubenswrapper[4955]: E0202 14:11:23.656512 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="extract-utilities" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656523 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="extract-utilities" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656789 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="copy" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656802 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6bb7a0-5c4c-4903-b4c8-4e454cc5796e" containerName="registry-server" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.656826 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerName="gather" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.658526 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.678462 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grj2n"] Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.727512 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-utilities\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.728118 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z22s\" (UniqueName: \"kubernetes.io/projected/958c54c3-2d4c-46ef-96e8-405d55991ecd-kube-api-access-5z22s\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.728256 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-catalog-content\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.830718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-catalog-content\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.830966 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-utilities\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.831108 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z22s\" (UniqueName: \"kubernetes.io/projected/958c54c3-2d4c-46ef-96e8-405d55991ecd-kube-api-access-5z22s\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.832810 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-utilities\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:23 crc kubenswrapper[4955]: I0202 14:11:23.833454 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-catalog-content\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.049307 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-6bshf_must-gather-g9hv8_bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1/copy/0.log" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.049792 4955 generic.go:334] "Generic (PLEG): container finished" podID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" containerID="2a775b54f460c6ce9be125ab63bdcdfcc1d0f2ac30cca0af19e68c6c6108ad72" exitCode=143 Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.093958 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z22s\" (UniqueName: \"kubernetes.io/projected/958c54c3-2d4c-46ef-96e8-405d55991ecd-kube-api-access-5z22s\") pod \"redhat-marketplace-grj2n\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.251410 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bshf_must-gather-g9hv8_bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1/copy/0.log" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.251849 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.298437 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.346871 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-must-gather-output\") pod \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.347395 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp892\" (UniqueName: \"kubernetes.io/projected/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-kube-api-access-jp892\") pod \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\" (UID: \"bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1\") " Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.357956 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-kube-api-access-jp892" (OuterVolumeSpecName: "kube-api-access-jp892") pod "bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" (UID: "bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1"). InnerVolumeSpecName "kube-api-access-jp892". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.454316 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp892\" (UniqueName: \"kubernetes.io/projected/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-kube-api-access-jp892\") on node \"crc\" DevicePath \"\"" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.609455 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" (UID: "bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.658103 4955 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 14:11:24 crc kubenswrapper[4955]: I0202 14:11:24.863711 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grj2n"] Feb 02 14:11:25 crc kubenswrapper[4955]: I0202 14:11:25.065370 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerStarted","Data":"bda089e1fc0e41734571ceed51b79c4ee873bf5b74a55c648e79dc3a7dcca0a6"} Feb 02 14:11:25 crc kubenswrapper[4955]: I0202 14:11:25.068818 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bshf_must-gather-g9hv8_bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1/copy/0.log" Feb 02 14:11:25 crc kubenswrapper[4955]: I0202 14:11:25.069452 4955 scope.go:117] "RemoveContainer" containerID="2a775b54f460c6ce9be125ab63bdcdfcc1d0f2ac30cca0af19e68c6c6108ad72" Feb 02 14:11:25 crc kubenswrapper[4955]: I0202 14:11:25.069502 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bshf/must-gather-g9hv8" Feb 02 14:11:25 crc kubenswrapper[4955]: I0202 14:11:25.105044 4955 scope.go:117] "RemoveContainer" containerID="843ebeebc7124f03620c2ce7d55dddc491ec926dd3cf1a53fbcaf6f0097ead3f" Feb 02 14:11:25 crc kubenswrapper[4955]: I0202 14:11:25.728878 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1" path="/var/lib/kubelet/pods/bb0f2b0c-c1f3-46a9-b352-08f5c870b1e1/volumes" Feb 02 14:11:26 crc kubenswrapper[4955]: I0202 14:11:26.081682 4955 generic.go:334] "Generic (PLEG): container finished" podID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerID="4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57" exitCode=0 Feb 02 14:11:26 crc kubenswrapper[4955]: I0202 14:11:26.081748 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerDied","Data":"4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57"} Feb 02 14:11:26 crc kubenswrapper[4955]: I0202 14:11:26.085761 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:11:27 crc kubenswrapper[4955]: I0202 14:11:27.097217 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerStarted","Data":"c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007"} Feb 02 14:11:29 crc kubenswrapper[4955]: I0202 14:11:29.120011 4955 generic.go:334] "Generic (PLEG): container finished" podID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerID="c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007" exitCode=0 Feb 02 14:11:29 crc kubenswrapper[4955]: I0202 14:11:29.120134 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerDied","Data":"c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007"} Feb 02 14:11:30 crc kubenswrapper[4955]: I0202 
14:11:30.134446 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerStarted","Data":"ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b"} Feb 02 14:11:30 crc kubenswrapper[4955]: I0202 14:11:30.163531 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grj2n" podStartSLOduration=3.536458109 podStartE2EDuration="7.163508427s" podCreationTimestamp="2026-02-02 14:11:23 +0000 UTC" firstStartedPulling="2026-02-02 14:11:26.085458426 +0000 UTC m=+4136.997794876" lastFinishedPulling="2026-02-02 14:11:29.712508744 +0000 UTC m=+4140.624845194" observedRunningTime="2026-02-02 14:11:30.151259748 +0000 UTC m=+4141.063596198" watchObservedRunningTime="2026-02-02 14:11:30.163508427 +0000 UTC m=+4141.075844877" Feb 02 14:11:34 crc kubenswrapper[4955]: I0202 14:11:34.298684 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:34 crc kubenswrapper[4955]: I0202 14:11:34.299290 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:34 crc kubenswrapper[4955]: I0202 14:11:34.346988 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:35 crc kubenswrapper[4955]: I0202 14:11:35.628663 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:35 crc kubenswrapper[4955]: I0202 14:11:35.682127 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grj2n"] Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.204451 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grj2n" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="registry-server" containerID="cri-o://ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b" gracePeriod=2 Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.676868 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.731290 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-catalog-content\") pod \"958c54c3-2d4c-46ef-96e8-405d55991ecd\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.731349 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z22s\" (UniqueName: \"kubernetes.io/projected/958c54c3-2d4c-46ef-96e8-405d55991ecd-kube-api-access-5z22s\") pod \"958c54c3-2d4c-46ef-96e8-405d55991ecd\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.731581 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-utilities\") pod \"958c54c3-2d4c-46ef-96e8-405d55991ecd\" (UID: \"958c54c3-2d4c-46ef-96e8-405d55991ecd\") " Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.733012 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-utilities" (OuterVolumeSpecName: "utilities") pod "958c54c3-2d4c-46ef-96e8-405d55991ecd" (UID: "958c54c3-2d4c-46ef-96e8-405d55991ecd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.741900 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958c54c3-2d4c-46ef-96e8-405d55991ecd-kube-api-access-5z22s" (OuterVolumeSpecName: "kube-api-access-5z22s") pod "958c54c3-2d4c-46ef-96e8-405d55991ecd" (UID: "958c54c3-2d4c-46ef-96e8-405d55991ecd"). InnerVolumeSpecName "kube-api-access-5z22s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.827342 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "958c54c3-2d4c-46ef-96e8-405d55991ecd" (UID: "958c54c3-2d4c-46ef-96e8-405d55991ecd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.838858 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.838888 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z22s\" (UniqueName: \"kubernetes.io/projected/958c54c3-2d4c-46ef-96e8-405d55991ecd-kube-api-access-5z22s\") on node \"crc\" DevicePath \"\"" Feb 02 14:11:37 crc kubenswrapper[4955]: I0202 14:11:37.838899 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958c54c3-2d4c-46ef-96e8-405d55991ecd-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.217221 4955 generic.go:334] "Generic (PLEG): container finished" podID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerID="ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b" exitCode=0 Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.217283 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grj2n" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.217302 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerDied","Data":"ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b"} Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.219477 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grj2n" event={"ID":"958c54c3-2d4c-46ef-96e8-405d55991ecd","Type":"ContainerDied","Data":"bda089e1fc0e41734571ceed51b79c4ee873bf5b74a55c648e79dc3a7dcca0a6"} Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.219499 4955 scope.go:117] "RemoveContainer" containerID="ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.239174 4955 scope.go:117] "RemoveContainer" containerID="c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.267903 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grj2n"] Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.275330 4955 scope.go:117] "RemoveContainer" containerID="4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.282819 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grj2n"] Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.333379 4955 scope.go:117] "RemoveContainer" containerID="ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b" Feb 02 14:11:38 crc kubenswrapper[4955]: E0202 14:11:38.333861 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b\": container with ID starting with ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b not found: ID does not exist" containerID="ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.333933 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b"} err="failed to get container status \"ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b\": rpc error: code = NotFound desc = could not find container \"ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b\": container with ID starting with ddd7b08ec5aa8d3aefc8d67238c4652bdd8dc8c6caa6955dd0ea79c43296de7b not found: ID does not exist" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.333960 4955 scope.go:117] "RemoveContainer" containerID="c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007" Feb 02 14:11:38 crc kubenswrapper[4955]: E0202 14:11:38.334320 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007\": container with ID starting with c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007 not found: ID does not exist" containerID="c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.334365 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007"} err="failed to get container status \"c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007\": rpc error: code = NotFound desc = could not find container \"c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007\": container with ID starting with c64ced0f7d3b0ad71a8c7e097e6093bdc769e49627b67e07f0fb012fbc081007 not found: ID does not exist" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.334396 4955 scope.go:117] "RemoveContainer" containerID="4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57" Feb 02 14:11:38 crc kubenswrapper[4955]: E0202 14:11:38.334672 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57\": container with ID starting with 4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57 not found: ID does not exist" containerID="4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57" Feb 02 14:11:38 crc kubenswrapper[4955]: I0202 14:11:38.334706 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57"} err="failed to get container status \"4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57\": rpc error: code = NotFound desc = could not find container \"4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57\": container with ID starting with 4cdd9b7c5fe3d5e76cca2296de628a05a80f696e7865f807402b2cb460e8ff57 not found: ID does not exist" Feb 02 14:11:39 crc kubenswrapper[4955]: I0202 14:11:39.727427 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" path="/var/lib/kubelet/pods/958c54c3-2d4c-46ef-96e8-405d55991ecd/volumes" Feb 02 14:12:33 crc kubenswrapper[4955]: I0202 14:12:33.017182 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:12:33 crc kubenswrapper[4955]: I0202 14:12:33.017753 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:12:42 crc kubenswrapper[4955]: I0202 14:12:42.137760 4955 scope.go:117] "RemoveContainer" containerID="db60ea4e59d921f5209b41fe0edc8bfcf6e1aebdc9dbad06f0c0cb4eb8bb169f" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.866750 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5ddpm"] Feb 02 14:12:44 crc kubenswrapper[4955]: E0202 14:12:44.867530 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="extract-content" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.867546 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="extract-content" Feb 02 14:12:44 crc kubenswrapper[4955]: E0202 14:12:44.867593 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="registry-server" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.867602 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="registry-server" Feb 02 14:12:44 crc kubenswrapper[4955]: E0202 14:12:44.867625 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="extract-utilities" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.867635 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="extract-utilities" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.867841 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="958c54c3-2d4c-46ef-96e8-405d55991ecd" containerName="registry-server" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.870021 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.923269 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5ddpm"] Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.950055 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-catalog-content\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.950144 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-utilities\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:44 crc kubenswrapper[4955]: I0202 14:12:44.950292 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c98sm\" (UniqueName: \"kubernetes.io/projected/e30d77ee-4da0-4420-a75b-377e1bbbc63f-kube-api-access-c98sm\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.051777 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c98sm\" (UniqueName: \"kubernetes.io/projected/e30d77ee-4da0-4420-a75b-377e1bbbc63f-kube-api-access-c98sm\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.051849 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-catalog-content\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.051905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-utilities\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.052320 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-utilities\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.052375 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-catalog-content\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.074351 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c98sm\" (UniqueName: \"kubernetes.io/projected/e30d77ee-4da0-4420-a75b-377e1bbbc63f-kube-api-access-c98sm\") pod \"redhat-operators-5ddpm\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.242486 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:45 crc kubenswrapper[4955]: I0202 14:12:45.790429 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5ddpm"] Feb 02 14:12:45 crc kubenswrapper[4955]: W0202 14:12:45.794325 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode30d77ee_4da0_4420_a75b_377e1bbbc63f.slice/crio-719e8e3bfd9e11df692a6e59e51d9451b82924e8c5e60e0609dc9fd191208f3f WatchSource:0}: Error finding container 719e8e3bfd9e11df692a6e59e51d9451b82924e8c5e60e0609dc9fd191208f3f: Status 404 returned error can't find the container with id 719e8e3bfd9e11df692a6e59e51d9451b82924e8c5e60e0609dc9fd191208f3f Feb 02 14:12:46 crc kubenswrapper[4955]: I0202 14:12:46.550935 4955 generic.go:334] "Generic (PLEG): container finished" podID="e30d77ee-4da0-4420-a75b-377e1bbbc63f" containerID="de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958" exitCode=0 Feb 02 14:12:46 crc kubenswrapper[4955]: I0202 14:12:46.551140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerDied","Data":"de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958"} Feb 02 14:12:46 crc kubenswrapper[4955]: I0202 14:12:46.551218 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerStarted","Data":"719e8e3bfd9e11df692a6e59e51d9451b82924e8c5e60e0609dc9fd191208f3f"} Feb 02 14:12:48 crc kubenswrapper[4955]: I0202 14:12:48.568765 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerStarted","Data":"98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd"} Feb 02 14:12:52 crc kubenswrapper[4955]: I0202 14:12:52.603846 4955 generic.go:334] "Generic (PLEG): container finished" podID="e30d77ee-4da0-4420-a75b-377e1bbbc63f" containerID="98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd" exitCode=0 Feb 02 14:12:52 crc kubenswrapper[4955]: I0202 14:12:52.603911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerDied","Data":"98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd"} Feb 02 14:12:53 crc kubenswrapper[4955]: I0202 14:12:53.616125 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerStarted","Data":"6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b"} Feb 02 14:12:53 crc kubenswrapper[4955]: I0202 14:12:53.636615 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5ddpm" podStartSLOduration=3.0650537 podStartE2EDuration="9.636593649s" podCreationTimestamp="2026-02-02 
14:12:44 +0000 UTC" firstStartedPulling="2026-02-02 14:12:46.552928335 +0000 UTC m=+4217.465264785" lastFinishedPulling="2026-02-02 14:12:53.124468284 +0000 UTC m=+4224.036804734" observedRunningTime="2026-02-02 14:12:53.63340102 +0000 UTC m=+4224.545737480" watchObservedRunningTime="2026-02-02 14:12:53.636593649 +0000 UTC m=+4224.548930099" Feb 02 14:12:55 crc kubenswrapper[4955]: I0202 14:12:55.243179 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:55 crc kubenswrapper[4955]: I0202 14:12:55.243623 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:12:56 crc kubenswrapper[4955]: I0202 14:12:56.286715 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5ddpm" podUID="e30d77ee-4da0-4420-a75b-377e1bbbc63f" containerName="registry-server" probeResult="failure" output=< Feb 02 14:12:56 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Feb 02 14:12:56 crc kubenswrapper[4955]: > Feb 02 14:13:03 crc kubenswrapper[4955]: I0202 14:13:03.016374 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:13:03 crc kubenswrapper[4955]: I0202 14:13:03.017003 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:13:05 crc kubenswrapper[4955]: I0202 14:13:05.303197 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:13:05 crc kubenswrapper[4955]: I0202 14:13:05.395256 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:13:05 crc kubenswrapper[4955]: I0202 14:13:05.565061 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5ddpm"] Feb 02 14:13:06 crc kubenswrapper[4955]: I0202 14:13:06.737519 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5ddpm" podUID="e30d77ee-4da0-4420-a75b-377e1bbbc63f" containerName="registry-server" containerID="cri-o://6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b" gracePeriod=2 Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.176741 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.209933 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c98sm\" (UniqueName: \"kubernetes.io/projected/e30d77ee-4da0-4420-a75b-377e1bbbc63f-kube-api-access-c98sm\") pod \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.210076 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-utilities\") pod \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.210303 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-catalog-content\") pod \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\" (UID: \"e30d77ee-4da0-4420-a75b-377e1bbbc63f\") " Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.215964 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e30d77ee-4da0-4420-a75b-377e1bbbc63f-kube-api-access-c98sm" (OuterVolumeSpecName: "kube-api-access-c98sm") pod "e30d77ee-4da0-4420-a75b-377e1bbbc63f" (UID: "e30d77ee-4da0-4420-a75b-377e1bbbc63f"). InnerVolumeSpecName "kube-api-access-c98sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.226677 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-utilities" (OuterVolumeSpecName: "utilities") pod "e30d77ee-4da0-4420-a75b-377e1bbbc63f" (UID: "e30d77ee-4da0-4420-a75b-377e1bbbc63f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.313297 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c98sm\" (UniqueName: \"kubernetes.io/projected/e30d77ee-4da0-4420-a75b-377e1bbbc63f-kube-api-access-c98sm\") on node \"crc\" DevicePath \"\"" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.313437 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.341946 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e30d77ee-4da0-4420-a75b-377e1bbbc63f" (UID: "e30d77ee-4da0-4420-a75b-377e1bbbc63f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.415296 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d77ee-4da0-4420-a75b-377e1bbbc63f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.751405 4955 generic.go:334] "Generic (PLEG): container finished" podID="e30d77ee-4da0-4420-a75b-377e1bbbc63f" containerID="6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b" exitCode=0 Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.751447 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerDied","Data":"6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b"} Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.751473 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5ddpm" event={"ID":"e30d77ee-4da0-4420-a75b-377e1bbbc63f","Type":"ContainerDied","Data":"719e8e3bfd9e11df692a6e59e51d9451b82924e8c5e60e0609dc9fd191208f3f"} Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.751489 4955 scope.go:117] "RemoveContainer" containerID="6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.751617 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5ddpm" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.778333 4955 scope.go:117] "RemoveContainer" containerID="98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.780060 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5ddpm"] Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.789448 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5ddpm"] Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.800698 4955 scope.go:117] "RemoveContainer" containerID="de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.840689 4955 scope.go:117] "RemoveContainer" containerID="6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b" Feb 02 14:13:07 crc kubenswrapper[4955]: E0202 14:13:07.842389 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b\": container with ID starting with 6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b not found: ID does not exist" containerID="6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.842441 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b"} err="failed to get container status \"6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b\": rpc error: code = NotFound desc = could not find container \"6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b\": container with ID starting with 6f3eeecc12f81568cac50251741bb428735ddbab164e2ca496e2921a6212999b not found: ID does not exist" Feb 02 14:13:07 crc 
kubenswrapper[4955]: I0202 14:13:07.842469 4955 scope.go:117] "RemoveContainer" containerID="98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd" Feb 02 14:13:07 crc kubenswrapper[4955]: E0202 14:13:07.843211 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd\": container with ID starting with 98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd not found: ID does not exist" containerID="98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.843259 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd"} err="failed to get container status \"98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd\": rpc error: code = NotFound desc = could not find container \"98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd\": container with ID starting with 98aff0d99cd91128405db6643189890c79e3c862b2b9ba0ae2e21b3c6cf0befd not found: ID does not exist" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.843295 4955 scope.go:117] "RemoveContainer" containerID="de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958" Feb 02 14:13:07 crc kubenswrapper[4955]: E0202 14:13:07.843710 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958\": container with ID starting with de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958 not found: ID does not exist" containerID="de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958" Feb 02 14:13:07 crc kubenswrapper[4955]: I0202 14:13:07.843757 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958"} err="failed to get container status \"de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958\": rpc error: code = NotFound desc = could not find container \"de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958\": container with ID starting with de50972b26e343c0d66cb2b42931b9ec3484f00e574e1159bd9ae7e2a156e958 not found: ID does not exist" Feb 02 14:13:09 crc kubenswrapper[4955]: I0202 14:13:09.734586 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e30d77ee-4da0-4420-a75b-377e1bbbc63f" path="/var/lib/kubelet/pods/e30d77ee-4da0-4420-a75b-377e1bbbc63f/volumes" Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.016301 4955 patch_prober.go:28] interesting pod/machine-config-daemon-6l62h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.016872 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.016918 4955 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.017501 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d"} pod="openshift-machine-config-operator/machine-config-daemon-6l62h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.017556 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerName="machine-config-daemon" containerID="cri-o://28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" gracePeriod=600 Feb 02 14:13:33 crc kubenswrapper[4955]: E0202 14:13:33.143368 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.993529 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2f37534-569f-4b2e-989a-f95866cb79e7" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" exitCode=0 Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.993642 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" event={"ID":"f2f37534-569f-4b2e-989a-f95866cb79e7","Type":"ContainerDied","Data":"28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d"} Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.993703 4955 scope.go:117] "RemoveContainer" containerID="917fe2126cbb3a7d8937c8ede6f6e0eb83a794f53a436a37d5cee320374819f9" Feb 02 14:13:33 crc kubenswrapper[4955]: I0202 14:13:33.994635 4955 scope.go:117] "RemoveContainer" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" Feb 02 14:13:33 crc kubenswrapper[4955]: E0202 14:13:33.995016 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:13:46 crc kubenswrapper[4955]: I0202 14:13:46.716994 4955 scope.go:117] "RemoveContainer" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" Feb 02 14:13:46 crc kubenswrapper[4955]: E0202 14:13:46.718341 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:13:57 
crc kubenswrapper[4955]: I0202 14:13:57.716715 4955 scope.go:117] "RemoveContainer" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" Feb 02 14:13:57 crc kubenswrapper[4955]: E0202 14:13:57.717430 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:14:08 crc kubenswrapper[4955]: I0202 14:14:08.716455 4955 scope.go:117] "RemoveContainer" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" Feb 02 14:14:08 crc kubenswrapper[4955]: E0202 14:14:08.717270 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:14:19 crc kubenswrapper[4955]: I0202 14:14:19.723791 4955 scope.go:117] "RemoveContainer" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" Feb 02 14:14:19 crc kubenswrapper[4955]: E0202 14:14:19.724532 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7" Feb 02 14:14:32 crc kubenswrapper[4955]: I0202 14:14:32.717440 4955 scope.go:117] "RemoveContainer" containerID="28f2a4b51b84392a2a442f7abba3b9ae56f3e4829ff8b989a19adcbad6f16d0d" Feb 02 14:14:32 crc kubenswrapper[4955]: E0202 14:14:32.722939 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6l62h_openshift-machine-config-operator(f2f37534-569f-4b2e-989a-f95866cb79e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-6l62h" podUID="f2f37534-569f-4b2e-989a-f95866cb79e7"